Xamarin.Native | Xamarin Document Scanner

Integration with Xamarin.Android and Xamarin.iOS#

The NuGet package ScanbotSDK.Xamarin is a universal package for Android and iOS. It contains the Xamarin Bindings for the native Scanbot SDKs for Android and iOS as well as the Wrapper classes.

Namespaces#

The bindings for Native SDKs and the Wrappers are stored in different namespaces which are explained here.

Xamarin Wrapper#

  • Namespace for Android: ScanbotSDK.Xamarin.Android
  • Namespace for iOS: ScanbotSDK.Xamarin.iOS

The idea of the Scanbot SDK Xamarin Wrapper is to provide a unified and convenient API for iOS and Android. However, since not all Native SDK functionalities are available in the Wrapper namespace, you can use and call them as needed directly from the native namespaces.

Currently, the following Package II functionalities are available in the Xamarin Wrapper classes:

  • Document detection on images
  • Image processing (cropping and perspective correction, rotating, etc.)
  • Image filters
  • PDF creation
  • TIFF creation
  • Text Recognition (OCR)

Native SDKs#

Native SDKs for Android and iOS are provided as Xamarin Bindings Libraries and can be found in these namespaces:

  • Android: Net.Doo.Snap, IO.Scanbot.Sdk
  • iOS: ScanbotSDK.iOS

The Xamarin Bindings for the native SDKs expose most of the available functionality of the Scanbot SDK.

👉 Documentation for the Native SDK classes can be found here:

Getting started#

Initialize SDK#

The Scanbot SDK must be initialized before usage. Make sure to run the initialization as early as possible. We recommend implementing the initialization in the Application class of the Android app, and in the AppDelegate class of the iOS app.

In the following examples we will use the convenient wrapper class SBSDK.

Android#

Add the following code to your Application class:

using ScanbotSDK.Xamarin.Android.Wrapper;
[Application(LargeHeap = true)]
public class MainApplication : Application
{
    ...
    public override void OnCreate()
    {
        ...
        // You can pass null as licenseKey for trial mode. See the "License" section for more details.
        SBSDK.Initialize(this, licenseKey, new SBSDKConfiguration { EnableLogging = true });
        ...
    }
    ...
}

Furthermore, it is highly recommended to add the flag LargeHeap = true to the Application attribute on the MainApplication since image processing is very memory-intensive.

iOS#

Add the following code to your AppDelegate class:

using ScanbotSDK.Xamarin.iOS.Wrapper;
[Register("AppDelegate")]
public class AppDelegate : UIApplicationDelegate
{
    ...
    public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
    {
        // You can pass null as licenseKey for trial mode. See the "License" section for more details.
        SBSDK.Initialize(application, licenseKey, new SBSDKConfiguration { EnableLogging = true });
        ...
        return true;
    }
    ...
}

Storage Encryption#

Scanbot SDK provides the ability to store the generated image files (JPG, PNG) and PDF files in encrypted form. This feature adds an additional level of security to the default secure storage locations of the native SDKs.

By default, file encryption is disabled. If you use the ScanbotSDK wrapper, pass the following config parameters on SDK initialization:

new SBSDKConfiguration
{
    Encryption = new SBSDKEncryption
    {
        Mode = EncryptionMode.AES256,
        Password = "SomeSecretPa$$w0rdForFileEncryption"
    }
};

You can also use the native bindings to enable encryption. For Android:

var initializer = new IO.Scanbot.Sdk.ScanbotSDKInitializer();
var processor = new AESEncryptedFileIOProcessor(
    "SomeSecretPa$$w0rdForFileEncryption",
    AESEncryptedFileIOProcessor.AESEncrypterMode.Aes256);
initializer.UseFileEncryption(true, processor);

Or for iOS, you should first implement the SBSDKStorageCrypting interface:

public class FormsEncryption : SBSDKStorageCrypting
{
    private string password;
    private EncryptionMode mode;

    SBSDKAESEncrypter Encrypter { get => new SBSDKAESEncrypter(password, mode.ToNative()); }

    public FormsEncryption(SBSDKEncryption encryption)
    {
        password = encryption.Password;
        mode = encryption.Mode;
    }

    public override NSData DecryptData(NSData data)
    {
        return Encrypter.DecryptData(data);
    }

    public override NSData EncryptData(NSData data)
    {
        return Encrypter.EncryptData(data);
    }
}

And then set an instance of that as the default encrypter:

ScanbotSDKUI.DefaultImageStoreEncrypter = encryption;
ScanbotSDKUI.DefaultPDFEncrypter = encryption;

When storage encryption is activated, the native Scanbot SDKs use the built-in AES 128 or AES 256 encryption. All generated image files (JPG, PNG) including the preview image files, as well as the exported PDF files, will be encrypted in memory and stored as encrypted data files on the flash storage of the device.

The Scanbot SDK derives the AES key from the given password, an internal salt value, and the internal number of iterations using the PBKDF2 function.
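As a rough illustration of this derivation step, PBKDF2 in .NET looks like the sketch below. Note that the SDK's actual salt, iteration count, and hash algorithm are internal and not configurable; the values used here are purely hypothetical.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class Pbkdf2Sketch
{
    static void Main()
    {
        // Hypothetical values - the SDK uses its own internal salt and iteration count.
        var password = "SomeSecretPa$$w0rdForFileEncryption";
        var salt = Encoding.UTF8.GetBytes("hypothetical-salt");
        var iterations = 10000;

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations, HashAlgorithmName.SHA256))
        {
            // 32 bytes = 256-bit key, matching the AES 256 mode above.
            byte[] aesKey = pbkdf2.GetBytes(32);
            Console.WriteLine(aesKey.Length);
        }
    }
}
```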

When applying image operations like cropping, rotation or image filters, Scanbot SDK will decrypt the image file in memory, apply the changes, encrypt and store it again.

Scanbot SDK UI Components#

RTU UI Components#

The Ready-To-Use UI (RTU UI) is a set of easy to integrate and customize high-level UI components (View Controllers for iOS and Activities for Android) for the most common tasks in Scanbot SDK. The design and behavior of these RTU UI Components are based on our many years of experience as well as the feedback from our SDK customers.

The following RTU UI Components and classes are currently provided:

Android:

  • Document Scanner - DocumentScannerActivity
  • Cropping - CroppingActivity
  • MRZ Scanner - MRZScannerActivity
  • Barcode and QR-Code Scanner - BarcodeScannerActivity & BatchBarcodeScannerActivity
  • Generic Document Recognizer - GenericDocumentRecognizerActivity

iOS:

  • Document Scanner - SBSDKUIDocumentScannerViewController
  • Cropping - SBSDKUICroppingViewController
  • MRZ Scanner - SBSDKUIMRZScannerViewController
  • Barcode and QR-Code Scanner - SBSDKUIBarcodeScannerViewController & SBSDKUIBarcodesBatchScannerViewController

For more details please see the corresponding API docs of the classes as well as our example app.

Customization of RTU UI:

The main idea of the RTU UI is to provide simple-to-integrate and simple-to-customize screen components. Because of this, customization is limited to the following:

  • UI: All colors and text resources (localization)
  • Behavior: Enable or disable features like Multi-Page Scanning, Auto Snapping, Flashlight

If you need more customization options you have to implement custom screens (View Controllers for iOS and Activities for Android) using our Classical UI Components.

RTU UI Examples:

Please see our example app scanbot-sdk-example-xamarin on GitHub.

Workflows#

The Workflow Components are also part of the RTU UI Components. A Workflow represents a set of multiple scanning steps. You can combine Document Scanning with QR code detection or MRZ recognition on an ID card image, for example. Workflow Steps can be run on a captured still image or a video frame, so a step can either be a live-detection or a still-image capturing step. You can validate each step result, present an error message to the user if the validation fails and restart a step.

The following predefined Workflow Step classes are provided:

Android:

  • ScanDocumentPageWorkflowStep - Specialized for capturing documents from high-res still images.
  • ScanBarCodeWorkflowStep - Recognition of Barcodes/QR codes on low-res video frames (live detection).
  • ScanMachineReadableZoneWorkflowStep - For scanning of ID cards or passports with MRZ recognition.
  • ScanDisabilityCertificateWorkflowStep - For recognition and data extraction from Disability Certificates (DC) forms.
  • ScanPayFormWorkflowStep - For recognition and data extraction from SEPA Payforms.

iOS:

  • SBSDKUIScanDocumentPageWorkflowStep - Specialized for capturing documents from high-res still images.
  • SBSDKUIScanBarCodeWorkflowStep - Recognition of Barcodes/QR codes on low-res video frames (live detection).
  • SBSDKUIScanMachineReadableZoneWorkflowStep - For scanning of ID cards or passports with MRZ recognition.
  • SBSDKUIScanDisabilityCertificateWorkflowStep - For recognition and data extraction from Disability Certificates (DC) forms.
  • SBSDKUIScanPayFormWorkflowStep - For recognition and data extraction from SEPA Payforms.

For more details please see the corresponding API docs of the classes as well as our example app.

Workflows Examples:

Please see our example app scanbot-sdk-example-xamarin on GitHub.

Classical UI Components#

Our Classical UI Components allow you to build custom screens that are flexible and fully customizable. They are a set of easy-to-integrate and easy-to-customize components (Views, Buttons, Handlers, Controllers, etc.) which can be embedded and extended in your custom screens.

For more details please see the docs of the native SDKs for iOS and Android as well as our example app.

Classical UI Examples:

Please see our example app scanbot-sdk-example-xamarin on GitHub.

Document Detection#

The Scanbot SDK uses digital image processing algorithms to find rectangular, document-like polygons in a digital image. As input, a UIImage on iOS or a Bitmap on Android is accepted.

Bitmap image = ...; // on Android
UIImage image = ...; // on iOS

var detectionResult = SBSDK.DetectDocument(image);
if (detectionResult.Status == DocumentDetectionStatus.Ok)
{
    var resultImage = detectionResult.Image as Bitmap;    // on Android
    // var resultImage = detectionResult.Image as UIImage; // on iOS
    var polygon = detectionResult.Polygon;
    ...
}

The result is an object of type DocumentDetectionResult which contains the detection status and on success the warped and cropped document image as well as the detected polygon. If there was no document detected the status enum provides the reason (noisy background, too dark, etc). The polygon is a list with 4 float points (one for each corner). Each point has coordinates in the range [0..1], representing a position relative to the image size. For instance, if a point has the coordinates (0.5, 0.5), it means that it is located exactly in the center of the image.
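Mapping those relative points back to pixel coordinates is a simple multiplication by the image dimensions. A minimal sketch (the tuple-based point type here is illustrative, not the wrapper's actual polygon representation):

```csharp
using System;

class PolygonSketch
{
    // Converts a relative [0..1] point to absolute pixel coordinates.
    static (double X, double Y) ToPixels(double relX, double relY, int imageWidth, int imageHeight)
    {
        return (relX * imageWidth, relY * imageHeight);
    }

    static void Main()
    {
        // A point at (0.5, 0.5) lies exactly in the center of a 1000x2000 image.
        var center = ToPixels(0.5, 0.5, 1000, 2000);
        Console.WriteLine($"{center.X},{center.Y}");
    }
}
```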

Alternatively, you can pass a Uri on Android or NSUrl on iOS of the source image.

On Android:

public static DocumentDetectionResult DetectDocument(Android.Net.Uri imageUri, Android.Content.Context context)

On iOS:

public static DocumentDetectionResult DetectDocument(Foundation.NSUrl imageUrl)

Please note that NSUrl must be a valid file URL (file:///). Assets URLs (assets-library://) are not supported.

Image Filtering#

Bitmap image = ...; // on Android
UIImage image = ...; // on iOS

var resultImage = SBSDK.ApplyImageFilter(image, ImageFilter.Binarized);

Supported image filters:

  • ImageFilter.ColorEnhanced - Optimizes colors, contrast and brightness.
  • ImageFilter.Grayscale - Grayscale filter
  • ImageFilter.Binarized - Standard binarization filter with contrast optimization. Creates an 8-bit grayscale image with mostly black or white pixels.
  • ImageFilter.ColorDocument - MagicColor filter. Fixes white balance and cleans up the background.
  • ImageFilter.PureBinarized - A filter for binarizing an image. Creates an image with pixel values set to either pure black or pure white.
  • ImageFilter.BackgroundClean - Cleans up the background and tries to preserve photos within the image.
  • ImageFilter.BlackAndWhite - Black and white filter with background cleaning. Creates an 8-bit grayscale image with mostly black or white pixels.
  • ImageFilter.OtsuBinarization - A filter for black and white conversion using OTSU binarization.
  • ImageFilter.DeepBinarization - A filter for black and white conversion primarily used for low contrast documents.
  • ImageFilter.EdgeHighlight - A filter that enhances edges in low contrast documents.
  • ImageFilter.LowLightBinarization - A binarization filter primarily intended for use on low contrast documents with hard shadows.

PDF Creation#

The Scanbot SDK renders images into a PDF document and stores it at the given target file location. A separate page is generated for each image.

Example code for Android:

Android.Net.Uri[] images = ...;
Android.Net.Uri pdfOutputFileUri = ...;
SBSDK.CreatePDF(images, pdfOutputFileUri, PDFPageSize.FixedA4);

Example code for iOS:

NSUrl[] images = ...;
NSUrl pdfOutputFileUrl = ...;
SBSDK.CreatePDF(images, pdfOutputFileUrl, PDFPageSize.FixedA4);

The following PDF page sizes are supported:

  • PDFPageSize.A4 - The page has the aspect ratio of the image, but is fitted within A4 size. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.FixedA4 - The page is A4 in size. The image is fitted and centered within the page. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.USLetter - The page has the aspect ratio of the image, but is fitted within US letter size. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.FixedUsLetter - The page is US letter in size. The image is fitted and centered within the page. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.Auto - For each page the best matching format (A4 or US letter) is used. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.AutoLocale - Each page of the result PDF will be of US letter or A4 size depending on the current locale. Portrait or landscape orientation is determined by the image's aspect ratio.
  • PDFPageSize.FromImage - Each page is as large as its image at 72 dpi.
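The PDFPageSize.AutoLocale behavior can be approximated in plain .NET using the region of the current culture. This is only a sketch of the idea; the SDK's actual locale logic is internal, and the country list here (US letter is standard mainly in the US and Canada) is an assumption:

```csharp
using System;
using System.Globalization;

class PageSizeSketch
{
    // Picks a page size from a culture name, mimicking locale-dependent paper-size selection.
    // Hypothetical helper - not part of the Scanbot SDK API.
    static string PageSizeForRegion(string cultureName)
    {
        var region = new RegionInfo(cultureName);
        switch (region.TwoLetterISORegionName)
        {
            case "US":
            case "CA":
                return "USLetter";
            default:
                return "A4";
        }
    }

    static void Main()
    {
        Console.WriteLine(PageSizeForRegion("en-US"));
        Console.WriteLine(PageSizeForRegion("de-DE"));
    }
}
```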

Detect barcodes from still image#

The Scanbot SDK can detect barcodes on an existing image (UIImage or Bitmap). The result format is equivalent to that of the Barcode Scanner.

Example code for Android:

var SDK = new IO.Scanbot.Sdk.ScanbotSDK(context);
BarcodeScanningResult result = SDK.BarcodeDetector().DetectFromBitmap(bitmap, 0);

Example code for iOS:

var scanner = new SBSDKBarcodeScanner();
SBSDKBarcodeScannerResult[] result = scanner.DetectBarCodesOnImage(image);

OCR - Optical Character Recognition#

The Scanbot SDK provides simple and convenient APIs to run Optical Character Recognition (OCR) on images.

As a result you can get:

  • a searchable PDF document with a recognized text layer (aka sandwiched PDF document)
  • the recognized text as plain text
  • bounding boxes of all recognized paragraphs, lines and words
  • text results and confidence values for each bounding box

The Scanbot OCR feature is based on the Tesseract OCR engine with some modifications and enhancements. The OCR engine supports a wide variety of languages. For each desired language a corresponding OCR training data file (.traineddata) must be provided. Furthermore, the special data file osd.traineddata is required (used for orientation and script detection). The Scanbot SDK package contains no language data files to keep the SDK small in size. You have to download and include the desired language files in your app.
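Note that the OCR API below takes two-letter language codes ("en", "de"), while the Tesseract data files use three-letter codes (eng.traineddata, deu.traineddata). A minimal sketch of building the list of files to bundle; the mapping table covers only a few illustrative languages:

```csharp
using System;
using System.Collections.Generic;

class OcrLanguageFiles
{
    // Illustrative subset - Tesseract data files use three-letter language codes.
    static readonly Dictionary<string, string> TwoToThreeLetter = new Dictionary<string, string>
    {
        { "en", "eng" },
        { "de", "deu" },
        { "fr", "fra" },
        { "es", "spa" }
    };

    // Builds the list of .traineddata files to bundle for the given languages.
    static IEnumerable<string> RequiredDataFiles(IEnumerable<string> languages)
    {
        // osd.traineddata is always required (orientation and script detection).
        yield return "osd.traineddata";
        foreach (var lang in languages)
            yield return TwoToThreeLetter[lang] + ".traineddata";
    }

    static void Main()
    {
        Console.WriteLine(string.Join(",", RequiredDataFiles(new[] { "en", "de" })));
    }
}
```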

Preconditions to achieve a good OCR result#

Conditions while scanning

A perfect document for OCR is flat, straight, in the highest possible resolution and does not contain large shadows, folds, or any other objects that could distract the recognizer. Our UI and algorithms do their best to help you meet these requirements. But as in photography, you can never fully get the image information back that was lost during the shot.

Languages

You can use multiple languages for OCR. But since the recognition of characters and words is a very complicated process, increasing the number of languages lowers the overall precision: with more languages, there are more candidate words a detected word could match. We suggest using as few languages as possible. Make sure that the language you are trying to detect is supported by the SDK and added to the project.

Size and position

Put the document on a flat surface. Take the photo from straight above in parallel to the document to make sure that the perspective correction does not need to be applied much. The document should fill most of the camera frame while still showing all of the text that needs to be recognized. This results in more pixels for each character that needs to be detected and hence, more detail. Skewed pages decrease the recognition quality.

Light and shadows

More ambient light is always better. The camera takes the shot at a lower ISO value, which results in less grainy photos. You should make sure that there are no visible shadows. If you have large shadows, it is better to take the shot at an angle instead. We also do not recommend using the flashlight - from this low distance it creates a light spot at the center of the document which decreases the recognition quality.

Focus

The document needs to be properly focused so that the characters are sharp and clear. The autofocus of the camera works well if you meet the minimum required distance for the lens to be able to focus. This usually starts at 5-10cm.

Typefaces

The OCR trained data is optimized for common serif and sans-serif font types. Decorative or script fonts drastically decrease the quality of the recognition.

Download and Provide OCR Language Files#

You can find a list of all supported OCR languages and download links on this Tesseract wiki page.

โš ๏ธ๏ธ๏ธ Please choose and download the proper version of the language data files:

Download the desired language files as well as the osd.traineddata file and place them in the Assets sub-folder SBSDKLanguageData/ of your Android app or in the Resources sub-folder ScanbotSDKOCRData.bundle/ of your iOS app.

Example for Android:

Droid/Assets/SBSDKLanguageData/eng.traineddata  // English language file
Droid/Assets/SBSDKLanguageData/deu.traineddata  // German language file
Droid/Assets/SBSDKLanguageData/osd.traineddata  // required special data file

Example for iOS:

iOS/Resources/ScanbotSDKOCRData.bundle/eng.traineddata  // English language file
iOS/Resources/ScanbotSDKOCRData.bundle/deu.traineddata  // German language file
iOS/Resources/ScanbotSDKOCRData.bundle/osd.traineddata  // required special data file

OCR API#

Example code for Android:

Android.Net.Uri[] images = ...;
Android.Net.Uri pdfOutputFileUri = ...;
OcrResult result = SBSDK.PerformOCR(images, new[] { "en", "de" }, pdfOutputFileUri);

// recognized plain text
string text = result.RecognizedText;

// bounding boxes and text results of recognized paragraphs, lines and words:
List<OcrResultBlock> paragraphs = result.Paragraphs;
List<OcrResultBlock> lines = result.Lines;
List<OcrResultBlock> words = result.Words;

See the API reference of the OcrResult class for more details.

Example code for iOS:

NSUrl[] images = ...;
NSUrl pdfOutputFileUrl = ...;
SBSDKOCRResult result = SBSDK.PerformOCR(images, new[] { "en", "de" }, pdfOutputFileUrl);

// recognized plain text
string text = result.RecognizedText;

// bounding boxes and text results of recognized paragraphs, lines and words:
SBSDKOCRResultBlock[] paragraphs = result.Paragraphs;
SBSDKOCRResultBlock[] lines = result.Lines;
SBSDKOCRResultBlock[] words = result.Words;

See the API reference of the SBSDKOCRResult class for more details.

Estimating Image Blurriness#

iOS#

var blur = new SBSDKBlurrinessEstimator().EstimateImageBlurriness(image);

Android#

var estimator = new IO.Scanbot.Sdk.ScanbotSDK(this).BlurEstimator();
var blur = estimator.EstimateInBitmap(bitmap, 0);

The estimator returns a value between 0 and 1: the lower the value, the sharper the image.

In broad terms, consider blur values as follows:

  • 0.0-0.3: Not blurry at all
  • 0.3-0.6: Somewhat blurry, should be okay
  • 0.6-1.0: Very blurry; the usefulness of the image is questionable

However, this is not as easy as it seems. If a scanned document has a predominantly white background, it can be classified as very blurry even when it is sharp. It is therefore best to use the blur estimator in conjunction with a finder view or on an already cropped document.
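The rough ranges above can be turned into simple user guidance. A sketch (the boundaries are taken directly from the list above and may need tuning for your use case):

```csharp
using System;

class BlurGuidance
{
    // Maps an estimated blurriness value (0..1, lower is sharper) to a user-facing hint.
    // Hypothetical helper - not part of the Scanbot SDK API.
    static string ForBlurValue(double blur)
    {
        if (blur < 0.3) return "Sharp";
        if (blur < 0.6) return "Somewhat blurry, should be okay";
        return "Too blurry - please retake";
    }

    static void Main()
    {
        Console.WriteLine(ForBlurValue(0.1));
        Console.WriteLine(ForBlurValue(0.5));
        Console.WriteLine(ForBlurValue(0.9));
    }
}
```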

Camera UI for document scanning#

The Scanbot SDK provides UI components for guided, automatic document scanning. These component classes handle all the camera and detection implementation details for you and can be used in your app. It is possible to customize the appearance and behavior of the guidance UI.

In the following examples we use the classes from the native SDK namespaces. Please make sure that you have defined the required permissions in your app.

iOS#

For your convenience, the Scanbot SDK for iOS comes with the SBSDKScannerViewController class, which handles all the document scanner implementation details. You can customize the appearance and behavior of the guidance UI via the controller's delegate.

Example code:

using UIKit;
using Foundation;
using ScanbotSDK.iOS; // native SDK namespace

public class MyCameraViewController : UIViewController
{
  protected SBSDKScannerViewController scannerViewController;
  protected bool viewAppeared = false;

  public override void ViewDidLoad()
  {
    base.ViewDidLoad();

    // Create the SBSDKScannerViewController.
    // We want it to be embedded into self.
    // As we do not want automatic image storage we pass null here for the image storage.
    this.scannerViewController = new SBSDKScannerViewController(this, null);

    // Set the delegate to self.
    this.scannerViewController.WeakDelegate = this;

    // We want unscaled images in full size:
    this.scannerViewController.ImageScale = 1.0f;
  }

  public override void ViewWillDisappear(bool animated)
  {
    base.ViewWillDisappear(animated);
    this.viewAppeared = false;
  }

  public override void ViewDidAppear(bool animated)
  {
    base.ViewDidAppear(animated);
    this.viewAppeared = true;
  }

  public override bool ShouldAutorotate()
  {
    // No autorotations
    return false;
  }

  public override UIInterfaceOrientationMask GetSupportedInterfaceOrientations()
  {
    // Only portrait
    return UIInterfaceOrientationMask.Portrait;
  }

  public override UIStatusBarStyle PreferredStatusBarStyle()
  {
    // White statusbar
    return UIStatusBarStyle.LightContent;
  }

  #region SBSDKScannerViewControllerDelegate

  [Export("scannerControllerShouldAnalyseVideoFrame:")]
  public bool ScannerControllerShouldAnalyseVideoFrame(SBSDKScannerViewController controller)
  {
    // We only want to process video frames when self is visible on screen and the front-most view controller.
    return this.viewAppeared && this.PresentedViewController == null;
  }

  [Export("scannerController:didCaptureDocumentImage:")]
  public void ScannerControllerDidCaptureDocumentImage(SBSDKScannerViewController controller, UIImage documentImage)
  {
    // Here we get the perspective-corrected and cropped document image after the shutter was (auto)released.
    // Do whatever you want with the documentImage...
  }

  [Export("scannerController:didCaptureImage:")]
  public void ScannerControllerDidCaptureImage(SBSDKScannerViewController controller, UIImage image)
  {
    // Here we get the full image from the camera. We could run another manual detection here or use the latest
    // detected polygon from the video stream to process the image with.
  }

  [Export("scannerController:didDetectPolygon:withStatus:")]
  public void ScannerControllerDidDetectPolygonWithStatus(SBSDKScannerViewController controller, SBSDKPolygon polygon, SBSDKDocumentDetectionStatus status)
  {
    // Every time the document detector finishes detection it calls this delegate method.
  }

  [Export("scannerController:viewForDetectionStatus:")]
  public UIView ScannerControllerViewForDetectionStatus(SBSDKScannerViewController controller, SBSDKDocumentDetectionStatus status)
  {
    // Here we can return a custom view that we want to use to visualize the latest detection status.
    // We return null for now to use the standard label.
    return null;
  }

  [Export("scannerController:polygonColorForDetectionStatus:")]
  public UIColor ScannerControllerPolygonColorForDetectionStatus(SBSDKScannerViewController controller, SBSDKDocumentDetectionStatus status)
  {
    // If the detector has found an acceptable polygon we show it in green.
    if (status == SBSDKDocumentDetectionStatus.Ok)
    {
      return UIColor.Green;
    }
    // Otherwise red.
    return UIColor.Red;
  }

  [Export("scannerController:localizedTextForDetectionStatus:")]
  public string ScannerControllerLocalizedTextForDetectionStatus(SBSDKScannerViewController controller, SBSDKDocumentDetectionStatus status)
  {
    // Here you can return a localized text depending on the detection status.
    // If not implemented, English standard strings are applied.
    switch (status)
    {
      case SBSDKDocumentDetectionStatus.Ok:
        return "OK, don't move";
      case SBSDKDocumentDetectionStatus.OK_BadAngles:
        return "Bad angles...";
      default:
        return null;
    }
  }

  #endregion
}

Android#

The Android camera API can be tricky and is far from developer-friendly. To help you avoid the issues we encountered while developing Scanbot, we created ScanbotCameraView. You can use it in your app's Activity or Fragment. ScanbotCameraView has a method getPreviewBuffer() which allows you to register for preview frames from the camera. While you can implement your own smart features, the Scanbot SDK comes with the built-in ContourDetectorFrameHandler which performs contour detection (document detection) and outputs results to listeners. To draw detected contours, the Scanbot SDK provides the class PolygonView. To further improve the user experience you might want to automatically take a photo when a document is detected and conditions are good - we call this Autosnapping. For this purpose we provide the AutoSnappingController class.

The following example demonstrates the use of all modules.

Example layout for a Scanner Camera Activity, MyCameraView.axml:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".ScanbotCameraActivity">

    <net.doo.snap.camera.ScanbotCameraView
        android:id="@+id/scanbotCameraView"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <net.doo.snap.ui.PolygonView
            android:id="@+id/scanbotPolygonView"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:polygonStrokeWidth="8dp"
            app:polygonStrokeColor="@color/material_deep_teal_200"
            app:polygonFillColor="#55009688" />

    </net.doo.snap.camera.ScanbotCameraView>

    <ImageView
        android:id="@+id/scanbotResultImageView"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_gravity="bottom|right" />

    <Button
        android:id="@+id/scanbotSnapButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|center_horizontal"
        android:text="Snap" />

    <Button
        android:id="@+id/scanbotFlashButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="top|right"
        android:text="Flash" />

    <TextView
        android:id="@+id/userGuidanceTextView"
        android:text=""
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center_vertical|center_horizontal" />

</FrameLayout>

And the Scanner Camera Activity class:

using Android.App;
using Android.Graphics;
using Android.OS;
using Android.Support.V4.View;
using Android.Support.V7.App;
using Android.Widget;

// native SDK namespace
using Net.Doo.Snap.Camera;
using Net.Doo.Snap.Lib.Detector;
using Net.Doo.Snap.UI;

// Wrapper namespace
using ScanbotSDK.Xamarin;
using ScanbotSDK.Xamarin.Android.Wrapper;
using ScanbotSDK.Xamarin.EventBus;

[Activity(Theme = "@style/Theme.AppCompat")]
public class MyCameraActivity : AppCompatActivity, IPictureCallback, ContourDetectorFrameHandler.IResultHandler, ICameraOpenCallback
{
  protected ScanbotCameraView cameraView;
  protected AutoSnappingController autoSnappingController;
  protected bool flashEnabled = false;
  protected ImageView resultImageView;
  protected TextView userGuidanceTextView;

  protected override void OnCreate(Bundle savedInstanceState)
  {
    SupportRequestWindowFeature(WindowCompat.FeatureActionBarOverlay);
    base.OnCreate(savedInstanceState);

    // Use our example view (MyCameraView.axml)
    SetContentView(Resource.Layout.MyCameraView);

    SupportActionBar.Hide();

    cameraView = FindViewById<ScanbotCameraView>(Resource.Id.scanbotCameraView);
    resultImageView = FindViewById<ImageView>(Resource.Id.scanbotResultImageView);
    userGuidanceTextView = FindViewById<TextView>(Resource.Id.userGuidanceTextView);

    ContourDetectorFrameHandler contourDetectorFrameHandler = ContourDetectorFrameHandler.Attach(cameraView);
    PolygonView polygonView = FindViewById<PolygonView>(Resource.Id.scanbotPolygonView);
    contourDetectorFrameHandler.AddResultHandler(polygonView);
    contourDetectorFrameHandler.AddResultHandler(this);

    autoSnappingController = AutoSnappingController.Attach(cameraView, contourDetectorFrameHandler);
    // Set the sensitivity of AutoSnappingController.
    // Range is from 0 to 1, where 1 is the most sensitive. The more sensitive it is, the faster it shoots.
    autoSnappingController.SetSensitivity(1.0f);

    cameraView.AddPictureCallback(this);
    cameraView.SetCameraOpenCallback(this);

    FindViewById(Resource.Id.scanbotSnapButton).Click += delegate {
      cameraView.TakePicture(false);
    };

    FindViewById(Resource.Id.scanbotFlashButton).Click += delegate {
      cameraView.UseFlash(!flashEnabled);
      flashEnabled = !flashEnabled;
    };
  }

  protected override void OnResume()
  {
    base.OnResume();
    cameraView.OnResume();
  }

  protected override void OnPause()
  {
    base.OnPause();
    cameraView.OnPause();
  }

  public void OnCameraOpened()
  {
    cameraView.PostDelayed(() => {
      // Enable continuous focus mode
      cameraView.ContinuousFocus();
    }, 300);
  }

  public bool HandleResult(ContourDetectorFrameHandler.DetectedFrame result)
  {
    // Here you are continuously notified about contour detection results.
    // For example, you can set a localized text for user guidance depending on the detection status.

    var color = Color.Red;
    var guideText = "Get closer...";

    if (result.Polygon == null || result.Polygon.Count == 0) {
      guideText = "Searching for document...";
    }

    if (result.DetectionResult == DetectionResult.Ok) {
      guideText = "OK, don't move.";
      color = Color.Green;
    }
    // else ...

    // Warning: the HandleResult callback comes from a worker thread. Use the main UI thread to update UI elements.
    userGuidanceTextView.Post(() => {
      userGuidanceTextView.Text = guideText;
      userGuidanceTextView.SetTextColor(color);
    });

    return false;
  }

  public void OnPictureTaken(byte[] image, int imageOrientation)
  {
    // Here we get the full image from the camera.

    // Decode the bytes as a Bitmap. To save memory for the preview image we downscale:
    // InSampleSize = 8 returns an image that is 1/8 the width/height of the original
    // (use 1 for the original size, i.e. no downscaling).
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.InSampleSize = 8;
    Bitmap bitmap = BitmapFactory.DecodeByteArray(image, 0, image.Length, options);

    // Run document detection on the image:
    var detectionResult = SBSDK.DocumentDetection(bitmap);
    if (detectionResult.Status.IsOk())
    {
      var documentImage = detectionResult.Image as Bitmap;
      // Do whatever you want with the documentImage...
      resultImageView.Post(() => {
        resultImageView.SetImageBitmap(documentImage);
        // Continue the camera preview with continuous focus mode.
        cameraView.ContinuousFocus();
        cameraView.StartPreview();
      });
    }
  }
}

Cropping UI#

The Scanbot SDK provides smart UI elements, such as magnetic lines and a magnifier, to help you build a UI for manual cropping of images.

In the following examples we use the classes from the native SDK namespaces.

iOS#

Example code for a NavigationController:

using Foundation;
using ScanbotSDK.iOS;
using UIKit;

public class MyCropNavigationController : UINavigationController
{
  UIImage image;
  SBSDKCropViewController sdkCropViewController;

  public MyCropNavigationController(UIImage image)
  {
    this.image = image;
  }

  public override void ViewDidLoad()
  {
    base.ViewDidLoad();

    sdkCropViewController = new SBSDKCropViewController();
    // When we set the image here, edge detection is run automatically by SBSDKCropViewController.
    sdkCropViewController.Image = this.image;
    sdkCropViewController.WeakDelegate = this;

    if (sdkCropViewController.Polygon == null) {
      // If no polygon was detected, set a default polygon.
      sdkCropViewController.Polygon = new SBSDKPolygon(); // {0,0}, {1,0}, {1,1}, {0,1}
    }

    PushViewController(sdkCropViewController, false);
  }

  #region SBSDKCropViewControllerDelegate

  [Export("cropViewController:didApplyChangesWithPolygon:croppedImage:")]
  public void CropViewControllerDidApplyChangesWithPolygon(SBSDKCropViewController cropViewController, SBSDKPolygon polygon, UIImage croppedImage)
  {
    // Handle the cropped document image (croppedImage) here...
  }

  [Export("cropViewControllerDidCancelChanges:")]
  public void CropViewControllerDidCancelChanges(SBSDKCropViewController cropViewController)
  {
    DismissViewController(true, null);
  }

  [Export("cancelButtonImageForCropViewController:")]
  public UIImage CancelButtonImageForCropViewController(SBSDKCropViewController cropViewController)
  {
    // Here you can return a custom image icon for the "cancel" button in the NavigationBar.
    return UIImage.FromBundle("my_close_icon");
  }

  [Export("applyButtonImageForCropViewController:")]
  public UIImage ApplyButtonImageForCropViewController(SBSDKCropViewController cropViewController)
  {
    // Here you can return a custom image icon for the "save/apply" button in the NavigationBar.
    return UIImage.FromBundle("my_apply_icon");
  }

  #endregion
}
Android#

Example code for an Activity:

Activity layout MyCroppingView.axml:

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MyCroppingActivity">

    <net.doo.snap.ui.EditPolygonImageView
        android:id="@+id/scanbotEditImageView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:cornerImageSrc="@drawable/ui_crop_corner_handle"
        app:edgeImageSrc="@drawable/ui_crop_side_handle"
        app:edgeColor="#0000ff" />

    <net.doo.snap.ui.MagnifierView
        android:id="@+id/scanbotMagnifierView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:magnifierImageSrc="@drawable/ui_crop_magnifier" />

</FrameLayout>

And the Activity class:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Android.App;
using Android.Graphics;
using Android.OS;
using Android.Support.V7.App;
using Android.Views;
using Android.Widget;

// native SDK namespaces
using Net.Doo.Snap.Lib.Detector;
using Net.Doo.Snap.UI;

[Activity(Theme = "@style/Theme.AppCompat")]
public class MyCroppingActivity : AppCompatActivity
{
  static IList<Android.Graphics.PointF> DEFAULT_POLYGON = new List<Android.Graphics.PointF>();

  static MyCroppingActivity()
  {
    DEFAULT_POLYGON.Add(new Android.Graphics.PointF(0, 0));
    DEFAULT_POLYGON.Add(new Android.Graphics.PointF(1, 0));
    DEFAULT_POLYGON.Add(new Android.Graphics.PointF(1, 1));
    DEFAULT_POLYGON.Add(new Android.Graphics.PointF(0, 1));
  }

  private Bitmap image; // original source image
  private EditPolygonImageView editPolygonImageView;
  private MagnifierView scanbotMagnifierView;
  private View cancelBtn, saveBtn;

  protected override void OnCreate(Bundle savedInstanceState)
  {
    base.OnCreate(savedInstanceState);

    SetContentView(Resource.Layout.MyCroppingView);

    SupportActionBar.SetDisplayShowHomeEnabled(false);
    SupportActionBar.SetDisplayShowTitleEnabled(false);
    SupportActionBar.SetDisplayShowCustomEnabled(true);
    SupportActionBar.SetDisplayHomeAsUpEnabled(false);
    SupportActionBar.SetCustomView(Resource.Layout.ActionBarMyCroppingView);

    editPolygonImageView = FindViewById<EditPolygonImageView>(Resource.Id.scanbotEditImageView);
    scanbotMagnifierView = FindViewById<MagnifierView>(Resource.Id.scanbotMagnifierView);

    cancelBtn = FindViewById<View>(Resource.Id.cancelButton);
    cancelBtn.Click += delegate {
      Finish();
    };

    saveBtn = FindViewById<View>(Resource.Id.doneButton);
    saveBtn.Click += delegate {
      cropAndSaveImage();
    };

    initImageView();
  }

  void initImageView()
  {
    Task.Run(() => {
      try {
        image = ...; // get/load your original source image

        // It is recommended to use a resized (thumbnail) image for the ImageView and ContourDetector:
        Bitmap thumbBitmap = ...; // scale your image down to a suitable size

        RunOnUiThread(() => {
          // Important: first set the image, then the detected polygon and lines!
          editPolygonImageView.SetImageBitmap(thumbBitmap);
          // Set up the MagnifierView every time editPolygonImageView is set with a new image.
          scanbotMagnifierView.SetupMagnifier(editPolygonImageView);
        });

        // Perform edge and line detection:
        ContourDetector detector = new ContourDetector();
        DetectionResult detectionResult = detector.Detect(thumbBitmap);

        var polygon = DEFAULT_POLYGON;
        if (detectionResult == DetectionResult.Ok ||
            detectionResult == DetectionResult.OkButBadAngles ||
            detectionResult == DetectionResult.OkButTooSmall ||
            detectionResult == DetectionResult.OkButBadAspectRatio)
        {
          // Get the detected polygon.
          polygon = detector.PolygonF;
        }

        RunOnUiThread(() => {
          // Set the detected polygon and lines:
          editPolygonImageView.Polygon = polygon;
          editPolygonImageView.SetLines(detector.HorizontalLines, detector.VerticalLines);
        });
      }
      catch (Exception e) {
        // error handling
      }
    });
  }

  void cropAndSaveImage()
  {
    Task.Run(() => {
      try {
        // Crop and warp the image by the detected/adjusted polygon:
        ContourDetector detector = new ContourDetector();
        Bitmap documentImage = detector.ProcessImageF(image, editPolygonImageView.Polygon, ContourDetector.ImageFilterNone);
        // save the documentImage...
      }
      catch (Exception e) {
        // error handling
      }

      RunOnUiThread(() => {
        Finish();
      });
    });
  }
}
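Note that the polygon used in the examples (like the default polygon {0,0}, {1,0}, {1,1}, {0,1}) is expressed in normalized coordinates, where {0,0} is the top-left and {1,1} the bottom-right corner of the image. This is why a polygon detected on the thumbnail can be applied to the full-size image. A quick sketch (plain C# with tuples, not SDK types) of mapping such a polygon to absolute pixel coordinates:

```csharp
using System.Collections.Generic;

// Illustration only: maps a polygon given in normalized [0..1] coordinates
// to absolute pixel coordinates of a concrete image.
static List<(float X, float Y)> ToPixelCoordinates(
    IEnumerable<(float X, float Y)> normalizedPolygon, int imageWidth, int imageHeight)
{
    var result = new List<(float X, float Y)>();
    foreach (var p in normalizedPolygon)
    {
        // Scale each normalized coordinate by the image dimensions.
        result.Add((p.X * imageWidth, p.Y * imageHeight));
    }
    return result;
}

// The full-image default polygon from the example above:
var defaultPolygon = new[] { (0f, 0f), (1f, 0f), (1f, 1f), (0f, 1f) };
var pixels = ToPixelCoordinates(defaultPolygon, 1920, 1080);
// pixels[2] is (1920, 1080), the bottom-right corner of a 1920x1080 image.
```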

DocumentDetectionStatus enum#

  • Ok - Document detection was successful. The detected contour looks like a valid document.
  • OkButTooSmall - Document was detected, but it does not fill the desired area in the camera viewport. The quality can be improved by moving the camera closer to the document.
  • OkButBadAngles - Document was detected, but the perspective is wrong (camera is tilted relative to the document). Quality can be improved by holding the camera directly over the document.
  • OkButBadAspectRatio - Document was detected, but it is rotated relative to the camera sensor, so its aspect ratio looks wrong for the current orientation. Quality can be improved by rotating the camera by 90 degrees.
  • ErrorTooDark - Document was not found, most likely because of bad lighting conditions.
  • ErrorTooNoisy - Document was not found, most likely because there is too much background noise (e.g. too many other objects on the table, or a background texture that is not uniform).
  • ErrorNothingDetected - Document was not found. It is probably not in the viewport. Usually, it does not make sense to show any information to the user at this point.
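In a live-detection UI like the HandleResult callback shown earlier, these statuses are typically mapped to user guidance hints. A minimal sketch, using a local enum that mirrors the status names above (not the SDK's own type) and hypothetical guidance strings based on the descriptions in this list:

```csharp
// Local enum mirroring the detection statuses listed above (illustration only).
enum DetectionStatus
{
    Ok, OkButTooSmall, OkButBadAngles, OkButBadAspectRatio,
    ErrorTooDark, ErrorTooNoisy, ErrorNothingDetected
}

// Hypothetical helper: maps a status to a user guidance hint, following the
// descriptions above. For ErrorNothingDetected, a neutral "searching" text is
// usually better than an error message.
static string GuidanceFor(DetectionStatus status)
{
    switch (status)
    {
        case DetectionStatus.Ok:                  return "OK, don't move.";
        case DetectionStatus.OkButTooSmall:       return "Move closer to the document.";
        case DetectionStatus.OkButBadAngles:      return "Hold the camera directly over the document.";
        case DetectionStatus.OkButBadAspectRatio: return "Rotate the camera by 90 degrees.";
        case DetectionStatus.ErrorTooDark:        return "Please turn on more light.";
        case DetectionStatus.ErrorTooNoisy:       return "Use a plain, uniform background.";
        default:                                  return "Searching for document...";
    }
}
```

Such a helper keeps the guidance texts in one place, which also makes them easy to localize.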