Classic UI

Scanning UI

The Scanbot SDK comes with an essential camera view, additional views for extending the camera functionality, and a set of frame handler and detector classes that handle all the camera and detection implementation details for you. It provides a UI for document scanning guidance as well as a UI and functionality for manual and automatic shutter release.

The ScanbotCameraXView provides fully customizable camera controls and features. Furthermore, ContourDetectorFrameHandler gives control over how and when frames are analyzed. And, most importantly, ContourDetector and ImageProcessor perform document detection, perspective correction, cropping and filtering of the document images.

Integration

  • Classic UI component
    • ScanbotCameraXView
    • PolygonView
    • ShutterButton
    • DocumentAutoSnappingController
    • ContourDetectorFrameHandler
    • FinderOverlayView / AdaptiveFinderOverlayView / ZoomFinderOverlayView

Take a look at our Example Apps to see how to integrate the Document Scanner.

Add Feature as a Dependency

The Document Scanner is available with SDK Package 1. You have to add the following dependency for it:

implementation("io.scanbot:sdk-package-1:$latestSdkVersion")

Initialize the SDK

The Scanbot SDK must be initialized before use. Add the following code snippet to your Application class:

import android.app.Application
import io.scanbot.sdk.ScanbotSDKInitializer

class ExampleApplication : Application() {

    override fun onCreate() {
        super.onCreate()

        // Initialize the Scanbot Scanner SDK:
        ScanbotSDKInitializer().initialize(this)
    }
}
caution

Unfortunately, we have noticed that all devices using a Cortex A53 processor DO NOT SUPPORT GPU acceleration. If you encounter any problems, please disable GPU acceleration for these devices.

ScanbotSDKInitializer()
    .allowGpuAcceleration(false)

The Android camera API might seem very tricky and far from being developer-friendly (in fact, very far). To help you avoid the same issues which we have encountered while developing the Scanbot SDK, we created the ScanbotCameraXView.

Getting Started

First of all, you have to add the SDK package and feature dependencies as described in the beginning.

Then initialize the SDK.

ScanbotCameraView is available with SDK Package 1 and is based on the old Camera (API v1) implementation. This component is deprecated and will be removed in a future release. ScanbotCameraXView is also available with SDK Package 1 and is based on the Android CameraX (API v2) implementation.

To get started, you need to complete three steps.

First: Add this permission to the AndroidManifest.xml

<uses-permission android:name="android.permission.CAMERA" />
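Note that on Android 6.0 (API level 23) and higher the CAMERA permission must also be granted at runtime before the preview can start. A minimal sketch using the standard AndroidX helpers (the request code is an arbitrary value chosen for this example):

import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val CAMERA_PERMISSION_REQUEST_CODE = 100 // arbitrary example value

fun ensureCameraPermission(activity: Activity) {
    val granted = ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED
    if (!granted) {
        // Ask the user for the camera permission; handle the result in onRequestPermissionsResult.
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(Manifest.permission.CAMERA),
            CAMERA_PERMISSION_REQUEST_CODE
        )
    }
}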

Second: Add ScanbotCameraXView to your layout, which is as simple as:

<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Third: Delegate the onResume and onPause methods of your Activity (or Fragment, whichever you are using) to ScanbotCameraView (ScanbotCameraXView does not need this as it is bound to a lifecycle):

class MyActivity : Activity() {

    ...

    override fun onResume() {
        super.onResume()
        scanbotCameraView.onResume()
    }

    override fun onPause() {
        scanbotCameraView.onPause()
        super.onPause()
    }
}

You can start your app, and you should see the camera preview.

Preview Mode

The ScanbotCameraXView supports 2 preview modes:

  • CameraPreviewMode.FIT_IN - in this mode the camera preview frames will be downscaled to the layout view size. Full preview frame content will be visible, but unused edges might appear in the preview layout.
  • CameraPreviewMode.FILL_IN - in this mode the camera preview frames fill the layout view. The preview frames may contain additional content at the edges that is not visible in the preview layout.

By default, ScanbotCameraXView uses FILL_IN mode. You can change it using the cameraView.setPreviewMode(CameraPreviewMode.FIT_IN) method.
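For example, to show the full preview frame content instead of filling the layout:

// FIT_IN downscales the preview frames so the whole frame is visible; unused edges may appear.
cameraView.setPreviewMode(CameraPreviewMode.FIT_IN)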

Auto-focus Sound and Shutter Sound

You can enable/disable auto-focus sounds and/or shutter sounds using setters in ScanbotCameraXView.

cameraView.setCameraOpenCallback(object : CameraOpenCallback {
    override fun onCameraOpened() {
        cameraView.postDelayed({
            cameraView.setAutoFocusSound(false)
            cameraView.setShutterSound(false)
        }, 700)
    }
})

cameraView.setShutterSound(enabled: Boolean) sets the camera shutter sound state. The default is true: the camera plays the system-defined camera shutter sound when takePicture() is called.

Note that devices may not always allow disabling the camera shutter sound. If the shutter sound state cannot be set to the desired value, this method will be ignored.

Continuous Focus Mode

For most use cases it is recommended to enable the "Continuous Focus Mode" of the Camera. Use the continuousFocus() method of ScanbotCameraXView for this. It should be called from the main thread and only when the camera is opened (CameraOpenCallback):

cameraView.setCameraOpenCallback(object : CameraOpenCallback {
    override fun onCameraOpened() {
        cameraView.postDelayed({
            cameraView.continuousFocus()
        }, 700)
    }
})

Please note: The Continuous Focus Mode will be automatically disabled after:

  • autoFocus method call;
  • a tap on the ScanbotCameraXView to perform auto focus;
  • takePicture event.

In these cases you have to call the continuousFocus() method again to re-enable the Continuous Focus Mode.

Example for the takePicture event, handled in the onPictureTaken(..) method of PictureCallback:

override fun onPictureTaken(image: ByteArray, captureInfo: CaptureInfo) {
    // image processing ...
    // ...

    cameraView.post {
        cameraView.continuousFocus()
        cameraView.startPreview()
    }
}

Auto Focus Troubleshooting

If the camera snaps a document (barcode, etc.) before auto focus has properly finished, consider adjusting the delayAfterFocusComplete property of the camera view to make the camera wait before snapping after the core component has been notified that auto focus has ended.
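A minimal sketch, assuming delayAfterFocusComplete expects the wait time in milliseconds (check the actual type and unit of the property in your SDK version):

// Assumption: the delay is specified in milliseconds.
cameraView.delayAfterFocusComplete = 500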

Orientation Lock

By default the ScanbotCameraXView will create pictures with their orientation based on the current device orientation. It is important to understand that the orientation of the taken picture is independent of the locked orientation mode of the Activity!

For example: if you just lock the Activity to portrait mode, the orientation of the taken image will still be based on the current device orientation!

Since version 1.31.1 the Scanbot SDK provides the functionality to apply a real orientation lock in ScanbotCameraXView. You can use the new methods cameraView.lockToLandscape(lockPicture: Boolean) or cameraView.lockToPortrait(lockPicture: Boolean) to lock the Activity and the taken picture to a desired orientation.
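For example, to lock the Activity to portrait (the lockPicture flag presumably controls whether the taken picture is locked to the same orientation as well, per the description above):

// Lock the screen orientation to portrait and also lock the orientation of the taken picture:
cameraView.lockToPortrait(true)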

Front Facing Camera

The Scanbot SDK provides the ability to use the front-facing camera as the source of the preview content. To enable it, set the front-facing camera mode with the method setCameraModule(cameraModule: CameraModule) in ScanbotCameraXView. The default is CameraModule.BACK.

Possible options here include:

  • CameraModule.BACK - the default back facing camera will be used.
  • CameraModule.FRONT - the default front facing camera will be used. The visual preview on the screen and buffer byte array in all FrameHandlers will be mirrored, but snapped pictures will be in their original state.

ScanbotCameraView also supports changing the camera module at runtime:

cameraView.setCameraModule(CameraModule.FRONT)
cameraView.restartPreview()
caution

ScanbotCameraXView now only supports setting the camera module configuration before starting the camera preview! Due to legacy issues, it is not possible to change the camera module once the camera preview has started.

Advanced: Preview Size and Picture Size

By default the ScanbotCameraXView selects the best available picture size (resolution of the taken picture) and a suitable preview size (preview frames).

You can change these values using the setter methods of ScanbotCameraXView:

cameraView.setCameraOpenCallback {
    cameraView.stopPreview()

    val supportedPictureSizes = cameraView.supportedPictureSizes
    // For demo purposes we just take the first picture size from the supported list!
    cameraView.setPictureSize(supportedPictureSizes[0])

    val supportedPreviewSizes = cameraView.supportedPreviewSizes
    // For demo purposes we just take the first preview size from the supported list!
    cameraView.setPreviewFrameSize(supportedPreviewSizes[0])

    cameraView.startPreview()
}
caution

Please take the following into account when changing these values: on most devices the aspect ratio of the camera sensor (camera picture) does not match the aspect ratio of the display.

Using the flashlight

It is possible to control the state of the camera's flashlight using the following method of ScanbotCameraXView:

cameraView.useFlash(enabled)

To get access to the current state of the flashlight use:

val state = cameraView.isFlashEnabled()
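For example, a click listener on a hypothetical flashButton can combine both calls to toggle the flashlight:

// flashButton is a placeholder for your own UI control.
flashButton.setOnClickListener {
    cameraView.useFlash(!cameraView.isFlashEnabled())
}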

Detecting and drawing contours

After you have set up the ScanbotCameraXView, the next logical step is to run contour detection and draw the results on the screen.

Contour detection

To start contour detection, you have to attach the ContourDetectorFrameHandler to the preview buffer:

val detector: ContourDetector = ScanbotSDK(context).createContourDetector()
val cameraView = findViewById<ScanbotCameraXView>(R.id.cameraView)

val frameHandler = ContourDetectorFrameHandler(context, detector)
cameraView.previewBuffer.addFrameHandler(frameHandler)

or even shorter

val detector: ContourDetector = ScanbotSDK(context).createContourDetector()
val frameHandler = ContourDetectorFrameHandler.attach(cameraView, detector)

At this point, the contour detection becomes active. Now all we have to do is wait for the results:

frameHandler.addResultHandler(ContourDetectorFrameHandler.ResultHandler { result ->
    when (result) {
        is FrameHandlerResult.Success -> {
            // handle result here: result.value.detectionResult
        }
        is FrameHandlerResult.Failure -> {
            // there is a license problem that needs to be handled
        }
    }
    false
})
Contour detection parameters

You can easily control the contour detection sensitivity by modifying the optional parameters in ContourDetectorFrameHandler:

val detector: ContourDetector = ScanbotSDK(context).createContourDetector()
val frameHandler = ContourDetectorFrameHandler.attach(cameraView, detector)
frameHandler.setAcceptedAngleScore(75.0)
frameHandler.setAcceptedSizeScore(80.0)

setAcceptedAngleScore(acceptedAngleScore: Double) - set the minimum score in percentage (0 - 100) of the perspective distortion to accept a detected document. The default value is 75.0. You can set lower values to accept more perspective distortion.

Warning: Lower values can result in document images which are more blurred.

setAcceptedSizeScore(acceptedSizeScore: Double) - set the minimum size in percentage (0 - 100) of the screen size to accept a detected document. It is sufficient that either the height or the width match the score. The default value is 80.0.

Warning: Lower values can result in lower resolution document images.

Drawing detected contour

To draw the detected contour use PolygonView. First, add it as a sub-view of ScanbotCameraXView:

<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/cameraView"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <io.scanbot.sdk.ui.PolygonView
        android:id="@+id/polygonView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:polygonStrokeWidth="8dp"
        app:polygonStrokeColor="#ffffff"
        app:polygonFillColor="#00ff00" />

</io.scanbot.sdk.ui.camera.ScanbotCameraXView>

Second, PolygonView should receive callbacks from ContourDetectorFrameHandler:

val polygonView = findViewById<PolygonView>(R.id.polygonView)
frameHandler.addResultHandler(polygonView.contourDetectorResultHandler)
Customizing drawn polygon

PolygonView supports the following attributes (which you can add in XML, as shown in the example above):

  • polygonStrokeWidth - the width (thickness) of the polygon lines
  • polygonStrokeColor - the color of the polygon lines
  • polygonFillColor - the fill color of the polygon
  • polygonStrokeColorOK - the color of the polygon lines when detection is successful (optional)
  • polygonFillColorOK - the fill color of the polygon when detection is successful (optional)
  • polygonAutoSnapStrokeWidth - the width of the autosnapping polygon progress indicator (default 3dp)
  • polygonAutoSnappingProgressStrokeColor - the color of the autosnapping polygon progress indicator (default is white)
  • polygonRoundedCornersRadius - the rounded corner radius of the polygon (0 by default)
  • drawShadow - whether the polygon stroke should cast a shadow (since Android API v26) (default is false)

User Guidance

To improve both the end user's experience and scanning quality you can add visual guidance to the scanning UI.

This will help the user understand the desired positioning, orientation, and size of the scanned document or the QR/barcodes in the camera preview and take care of the preliminary processing to improve the results.

General idea

In your layout, put your finder view on top of ScanbotCameraXView within the same parent, and point ScanbotCameraXView to it using the app:finder_view_id="@id/my_finder_view_id" attribute:

<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/cameraView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:finder_view_id="@id/my_finder_view_id" />

<io.scanbot.sdk.ui.camera.FinderOverlayView
    android:id="@+id/my_finder_view_id"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Alternatively, you can just set id as android:id="@+id/finder_overlay" for your finder view and ScanbotCameraXView will find it automatically.

Please note the following limitations when using finder view:

  • the parent view should not have any padding. ScanbotCameraXView should have android:layout_width="match_parent" and android:layout_height="match_parent" layout parameters and no padding or margins;
  • the "Finder Overlay" view can have any margins, size, background or even child views, but it should always be over the camera preview frame, otherwise it will throw an IllegalStateException.

Not only will this guide the user's scanning process, but the FrameHandler attached to the given ScanbotCameraXView will also receive a non-null FrameHandler.Frame.finderRect object representing the frame area within the view finder's bounds. This can later be passed to other SDK components that accept a finder rect.

To start with: bare android.view.View (full customization)

In case you want full control over the look and feel of the view finder - you can use any android.view.View subclass as a finder view. Take this example:

<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/camera_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:finder_view_id="@id/my_finder_view" />

<View
    android:id="@+id/my_finder_view"
    android:layout_width="match_parent"
    android:layout_height="100dp"
    android:layout_gravity="bottom"
    android:layout_marginLeft="20dp"
    android:layout_marginRight="20dp"
    android:layout_marginBottom="200dp"
    android:background="@drawable/finder_view_container_bg" />

where @drawable/finder_view_container_bg is your XML drawable with the bounds outline.

FinderOverlayView - ready-to-use solution

Instead of plain android.view.View you can use our specially made FinderOverlayView class. It handles all the hassle and leaves you with just a little bit of styling. Take this example:


<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/camera_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:finder_view_id="@id/my_finder_view" />

<io.scanbot.sdk.ui.camera.FinderOverlayView
    android:id="@+id/my_finder_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

This component allows the following customizations:

  • overlay_color - the color of the area outside the finder view gap
  • overlay_stroke_color - the color of the finder view border
  • stroke_width - the width of the finder view border
  • sbsdk_corner_radius - the radius for rounded corners of the finder view border
  • min_padding - the minimum space between the finder view and the screen borders
  • fixed_width - the finder view's fixed width
  • fixed_height - the finder view's fixed height
  • max_size - maximum size of the longer side, when fixed sizes are not set

Alternatively, if you do not want to specify a fixed width and height, you can programmatically set the desired finder view aspect ratio. It will then take all the available screen space, respecting the given aspect ratio and padding:

val requiredPageAspectRatios = listOf(FinderAspectRatio(21.0, 29.7)) // ~ A4 page size

...

val finderOverlayView = findViewById<FinderOverlayView>(R.id.finder_overlay_view)
finderOverlayView.setRequiredAspectRatios(requiredPageAspectRatios)


To set the padding from the edge of the preview (this means that the padding will not be calculated from the edge of the screen, but rather from the edge of the preview itself), use finderInsets API. Check both CameraPreviewMode.FIT_IN and CameraPreviewMode.FILL_IN to see the difference. To set all insets:

val finderOverlayView = findViewById<FinderOverlayView>(R.id.finder_overlay_view)
finderOverlayView.finderInsets = Insets.of(50, 200, 50, 0)
...

To set one inset:

val finderOverlayView = findViewById<FinderOverlayView>(R.id.finder_overlay_view)
finderOverlayView.setFinderInset(right=50)
...

There is also an option to create a safe area for the finder. This means that if some part of the preview is in this area, the finder will be moved out of this zone. For example, you can set the top safe area inset as the height of the toolbar to prevent your finder appearing behind the toolbar, even if the camera and finder layouts are placed behind the toolbar in the view stack.

To set all safe area insets:

val finderOverlayView = findViewById<FinderOverlayView>(R.id.finder_overlay_view)
finderOverlayView.safeAreaInsets = Insets.of(0, 200, 0, 0)
...

To set one safe area inset:

val finderOverlayView = findViewById<FinderOverlayView>(R.id.finder_overlay_view)
finderOverlayView.setSafeAreaInsets(top=200)
...

AdaptiveFinderOverlayView - for range of desired aspect ratios

In case you are scanning different documents with different acceptable aspect ratios, but still want to preserve the logic of having a pre-selected rectangle of the document - you might use AdaptiveFinderOverlayView.

Take this example:


<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/camera_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:finder_view_id="@id/my_finder_view" />

<io.scanbot.sdk.ui.camera.AdaptiveFinderOverlayView
    android:id="@+id/my_finder_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

AdaptiveFinderOverlayView uses ContourDetectorFrameHandler in its logic, so we also need to set that up:

val cameraView = findViewById<ScanbotCameraXView>(R.id.camera_view)
val finderOverlayView = findViewById<AdaptiveFinderOverlayView>(R.id.my_finder_view)

// we can use several aspect ratios:
val pageAspectRatios = listOf( // this will be used for ContourDetectorFrameHandler
    PageAspectRatio(21.0, 29.7),   // A4 sheet size
    PageAspectRatio(85.60, 53.98)) // credit card size
val finderAspectRatios = pageAspectRatios.toFinderAspectRatios() // for AdaptiveFinderOverlayView
finderOverlayView.setRequiredAspectRatios(finderAspectRatios)

val contourDetectorFrameHandler = ContourDetectorFrameHandler.attach(cameraView, scanbotSDK.createContourDetector())
contourDetectorFrameHandler.setRequiredAspectRatios(pageAspectRatios)
contourDetectorFrameHandler.addResultHandler(finderOverlayView.contourDetectorFrameHandler)

Now, during live detection, the finder view will adjust its borders to a detected document if it complies with one of the set aspect ratios.

Inserting views into finder sections

It is now possible to correctly lay out content within the finder. Both AdaptiveFinderOverlayView and FinderOverlayView have 3 sections into which a view can be inserted. All inserted views can be matched to the size of their section, so it is possible to build custom constraints inside each of them. For this, place your view in the XML layout with one of the special ids listed below.

First, add it to the view hierarchy as you would with any other finder view mentioned above, using one of the following ids:

  • finder_top_placeholder - for a container that will be placed above the finder frame.
  • finder_center_placeholder - for a container that will be placed inside the finder frame.
  • finder_bottom_placeholder - for a container that will be placed below the finder frame.
<io.scanbot.sdk.ui.camera.ScanbotCameraXView
    android:id="@+id/cameraView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

<io.scanbot.sdk.ui.camera.FinderOverlayView
    android:id="@+id/my_finder_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/finder_top_placeholder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/finder_center_placeholder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/finder_bottom_placeholder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</io.scanbot.sdk.ui.camera.FinderOverlayView>

All 3 zones in finder, colored.

It is also possible to add views programmatically by calling:


finderOverlayView.addTopPlaceholder(view)
finderOverlayView.addBottomPlaceholder(view)
finderOverlayView.addFinderPlaceholder(view)

Autosnapping

To further improve the user experience, you might want to automatically take a photo when a document is detected and conditions are good - we call this Auto Snapping.

How to use it

It is easy: just attach DocumentAutoSnappingController to the camera like in the following example:

val contourDetector = ScanbotSDK(context).createContourDetector()
val contourDetectorFrameHandler = ContourDetectorFrameHandler.attach(cameraView, contourDetector)
val autoSnappingController = DocumentAutoSnappingController.attach(cameraView, contourDetectorFrameHandler)

And you're done. Now the camera will automatically take photos when the underlying conditions are met.

Sensitivity

You can control the Auto Snapping speed by setting the sensitivity parameter in DocumentAutoSnappingController.

autoSnappingController.setSensitivity(1f)

Note: the higher the sensitivity, the faster the snap triggers. Sensitivity must be within the [0..1] range. A value of 1.0 triggers snapping immediately, whereas a value of 0.0 delays the snapping by 3 seconds.

The default value is 0.66 (about 1 second).
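Assuming the delay scales linearly with the sensitivity, the snapping delay is roughly (1 − sensitivity) × 3 seconds; the default of 0.66 then corresponds to about 1 second, which matches the value above.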

Autosnapping visualization

The Scanbot SDK provides functionality to visualize the auto snapping process. It is implemented as an io.scanbot.sdk.ui.PolygonView animation. To enable this animation, attach the PolygonView as an IAutoSnappingController.AutoSnappingStateListener to the DocumentAutoSnappingController (or to the GenericDocumentAutoSnappingController):

val autoSnappingController = DocumentAutoSnappingController.attach(scanbotCameraView, contourDetector)
autoSnappingController.stateListener = polygonView

PolygonView starts the animation as soon as the contour detector returns the detection status OK and finishes it as soon as the snap is triggered.

Handling contour detection results

You can handle the contour detection results using ContourDetectorFrameHandler#addResultHandler. It might be useful if you want to guide your user through the snapping process by, for instance, displaying respective icons and status messages.

contourDetectorFrameHandler.addResultHandler(ContourDetectorFrameHandler.ResultHandler { result ->
    when (result) {
        is FrameHandlerResult.Success -> {
            // handle result here: result.value.detectionResult
        }
        is FrameHandlerResult.Failure -> {
            // there is a license problem that needs to be handled
        }
    }
    false
})
caution

This callback is invoked on a worker thread. You need to move execution to the main thread before updating the UI.
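For example, a minimal sketch that posts the detected status to a hypothetical statusTextView via View.post, which schedules the update on the main thread:

frameHandler.addResultHandler(ContourDetectorFrameHandler.ResultHandler { result ->
    if (result is FrameHandlerResult.Success) {
        val status = result.value.detectionResult
        // View.post runs the block on the UI thread:
        statusTextView.post { statusTextView.text = status.toString() }
    }
    false
})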

On each frame you will get a DetectedFrame object which contains the results of the contour detection. One of the most important fields here is detectionResult, which is essentially the status of the contour detection. The possible values are listed below (a sketch mapping them to user guidance messages follows the list):

  • OK - contour detection was successful. The detected contour looks like a valid document. This is a good time to take a picture.
  • OK_BUT_TOO_SMALL - a document was detected, but it takes up too little of the camera viewport area. Quality can be improved by moving the camera closer to the document.
  • OK_BUT_BAD_ANGLES - a document was detected, but the perspective is wrong (camera is tilted relative to the document). Quality can be improved by holding the camera directly over the document.
  • OK_BUT_BAD_ASPECT_RATIO - a document was detected, but it has the wrong rotation relative to the camera sensor. Quality can be improved by rotating the camera by 90 degrees.
  • OK_OFF_CENTER - a document was detected, but it is off-center.
  • ERROR_TOO_DARK - a document was not found, most likely because of bad lighting conditions.
  • ERROR_TOO_NOISY - a document was not found, most likely because there is too much background noise (maybe too many other objects on the table, or the background texture is too complex).
  • ERROR_NOTHING_DETECTED - a document was not found. The document is probably not in the viewport. Usually it does not make sense to show any information to the user at this point.
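Based on these statuses, a minimal sketch of a mapping to user guidance messages might look as follows (assuming the status enum is named DetectionStatus in your SDK version; the message texts are just examples):

fun guidanceFor(status: DetectionStatus): String? = when (status) {
    DetectionStatus.OK -> "Don't move. Capturing the document..."
    DetectionStatus.OK_BUT_TOO_SMALL -> "Move closer to the document."
    DetectionStatus.OK_BUT_BAD_ANGLES -> "Hold the camera directly over the document."
    DetectionStatus.OK_BUT_BAD_ASPECT_RATIO -> "Rotate the device by 90 degrees."
    DetectionStatus.OK_OFF_CENTER -> "Center the document in the viewport."
    DetectionStatus.ERROR_TOO_DARK -> "Please turn on more light."
    DetectionStatus.ERROR_TOO_NOISY -> "Use a plain background without other objects."
    DetectionStatus.ERROR_NOTHING_DETECTED -> null // usually no hint is shown here
    else -> null // other or future statuses: show nothing
}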

Handling camera picture

Once a picture has been taken, whether automatically by the Auto Snapping feature or manually by the user, you have to handle the image data by implementing the method abstract fun onPictureTaken(image: ByteArray, captureInfo: CaptureInfo) of the PictureCallback class. In this method you receive the image byte array of the original picture data and the image orientation value.

It is important to understand that this image data represents the original picture and not the cropped document image.

To get the cropped document image, you have to perform document contour detection on the original image and apply the cropping operation by using the ContourDetector class:

// Create one instance per screen
val detector: ContourDetector = ScanbotSDK(context).createContourDetector()

//...

cameraView.addPictureCallback(object : PictureCallback() {
    override fun onPictureTaken(image: ByteArray, captureInfo: CaptureInfo) {
        fun restartCamera() {
            // Continue with the camera preview to scan the next image:
            cameraView.post {
                cameraView.continuousFocus()
                cameraView.startPreview()
            }
        }

        // Decode image byte array to Bitmap, and rotate according to orientation:
        val bitmap = ImageProcessor(image).rotate(captureInfo.imageOrientation).processedBitmap()

        if (bitmap == null) {
            // license or feature is not available
            restartCamera()
            return
        }

        // Run document contour detection on original image:
        detector.detect(bitmap)
        val detectedPolygon = detector.polygonF
        if (detectedPolygon != null) {
            // And crop using detected polygon to get the final document image:
            val documentImage = ImageProcessor(bitmap).crop(detectedPolygon).processedBitmap()

            // Work with the final document image (store it as a file, etc)
            // ...

            restartCamera()
        }
    }
})
Handling the image orientation

The value of the captureInfo.imageOrientation parameter requires a special handling on some Android devices. It represents the image orientation based on the current device orientation. On most Android devices the value of captureInfo.imageOrientation will be 0, but on some devices (like most Samsung devices) the value will be 90. You have to handle this value accordingly and rotate the original image. See the example code above or our example app.

FinderPictureCallback

You can use an "advanced" version of PictureCallback - FinderPictureCallback - which combines the FinderOverlayView feature, ImageProcessor and the camera snapping mechanism. This callback automatically crops the part of the snapped image that is visible in the finder view and rotates the cropped image according to the imageOrientation value.

To instantiate FinderPictureCallback, pass an instance of ImageProcessor to the FinderPictureCallback's constructor. As a result, you will receive the image as a Bitmap in the abstract fun onPictureTaken(image: Bitmap?, captureInfo: CaptureInfo) method of the FinderPictureCallback.

val scanbotSDK = ScanbotSDK(this)
cameraView.addPictureCallback(object : FinderPictureCallback() {
    override fun onPictureTaken(image: Bitmap?, captureInfo: CaptureInfo) {
        // Work with the final image (store it as a file, etc)
        // ...

        // Continue with the camera preview to scan the next image:
        cameraView.post {
            cameraView.continuousFocus()
            cameraView.startPreview()
        }
    }
})

Cropping UI

The classic component of the cropping functionality consists of 2 main views - EditPolygonImageView and MagnifierView. They provide the ability to precisely adjust the cropping polygon on the screen.

To integrate the classic component of the Cropping feature you can take a look at our Cropping Example App or check the following step-by-step integration instructions.

Add feature dependencies and initialize the SDK

Add EditPolygonImageView to your layout


<io.scanbot.sdk.ui.EditPolygonImageView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:edgeColor="#00cea6"
    app:cornerImageSrc="@drawable/ui_crop_corner_handle"
    app:edgeImageSrc="@drawable/ui_crop_side_handle"
    app:editPolygonHandleSize="48dp"
    app:magneticLineTreshold="10dp"
    app:editPolygonStrokeWidth="3dp" />

The customizable attributes here are:

  • edgeColor - the color of the polygon line. Default is undefined - meaning no color.
  • cornerImageSrc - the image to be used as the polygon corner handle
  • edgeImageSrc - the image to be used as the edge handle
  • editPolygonHandleSize - defines the touchable area size for the polygon edge and corner handles. Default is 48dp.
  • editPolygonStrokeWidth - the width of the polygon line. Default is 3dp.
  • magneticLineTreshold - the edge should be this close to the magnetic line to snap in place. Default is 10dp.
  • edgeColorOnLine - the color of the edge when it aligns with the detected magnetic line. Default is undefined - meaning no color change.

Set the detected polygon

The default polygon for EditPolygonImageView can be obtained from ContourDetector. ContourDetector always holds the latest detected contour information, such as lines and polygons. After the first detection, you can set the latest detected contour on EditPolygonImageView.

val detector = ScanbotSDK(context).createContourDetector()
detector.detect(image)
editPolygonView.polygon = detector.polygonF

EditPolygonImageView supports the magnetic lines feature. For this you have to set the detected horizontal and vertical lines:

editPolygonView.setLines(detector.horizontalLines, detector.verticalLines)

Add magnifying glass

EditPolygonImageView supports a magnifying glass feature. To enable it, you should add io.scanbot.sdk.ui.MagnifierView to your custom layout.

<io.scanbot.sdk.ui.MagnifierView
    android:id="@+id/magnifier"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:magnifierImageSrc="@drawable/ui_crop_magnifier"
    app:magnifierRadius="36dp"
    app:magnifierMargin="16dp"
    app:magnifierEnableBounding="true" />

The customizable attributes here are:

  • magnifierImageSrc - the magnifier image, used as a mask. Make sure it has a transparent section so that the magnified content is visible. No default value.
  • magnifierRadius - the magnifier's size. Default is 36dp.
  • magnifierMargin - the magnifier's margin (distance to the screen border). Default is 16dp.
  • magnifierEnableBounding - bounce the magnifier to the opposite part of the screen when editing the polygon's corner. Default is true.
Important

You should set up the MagnifierView every time editPolygonView is set with a new image:

magnifierView.setupMagnifier(editPolygonView)

Get the selected polygon

If you want to get a selected polygon from EditPolygonImageView just call:

val currentPolygon = editPolygonView.polygon

