Text recognizer for performing optical character recognition (OCR) on an input image.
A text recognizer is created via
getOnDeviceTextRecognizer() or
getCloudTextRecognizer(). See the code examples below.
To use the on-device text recognizer:

```java
FirebaseVisionTextRecognizer textRecognizer =
    FirebaseVision.getInstance().getOnDeviceTextRecognizer();
```

Or to use the cloud text recognizer:

```java
FirebaseVisionTextRecognizer textRecognizer =
    FirebaseVision.getInstance().getCloudTextRecognizer();
```
To perform OCR on an image, you first need to create an instance of FirebaseVisionImage from a ByteBuffer, Bitmap, etc. See the FirebaseVisionImage documentation for more details. For example, the code below creates a FirebaseVisionImage from a Bitmap.

```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```
Then the code below can detect text in the supplied FirebaseVisionImage:

```java
Task<FirebaseVisionText> task = textRecognizer.processImage(image);
task.addOnSuccessListener(...).addOnFailureListener(...);
```
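As a fuller sketch of wiring the two listeners, assuming `textRecognizer` and `bitmap` were obtained as shown above (the log tag and handler bodies are illustrative, not part of the reference):

```java
// Sketch only: processImage() runs asynchronously and reports its result
// through the returned Task's listeners.
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
textRecognizer.processImage(image)
        .addOnSuccessListener(firebaseVisionText -> {
            // FirebaseVisionText exposes the full recognized string via getText().
            Log.d("OCR", firebaseVisionText.getText());
        })
        .addOnFailureListener(e ->
            // Invoked if recognition fails, e.g. a network error for the cloud model.
            Log.e("OCR", "Text recognition failed", e));
```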
Nested types:

| Type | Name | Description |
|---|---|---|
| @interface | FirebaseVisionTextRecognizer.RecognizerType | Recognizer types. |
| int | CLOUD | Indicates that the recognizer is using a cloud model. |
| int | ON_DEVICE | Indicates that the recognizer is using an on-device model. |

Public methods:

| Returns | Method | Description |
|---|---|---|
| void | close() | Closes the text recognizer and releases its model resources. Throws IOException. |
| int | getRecognizerType() | Gets the recognizer type, either on-device or cloud. |
| Task<FirebaseVisionText> | processImage(FirebaseVisionImage image) | Detects FirebaseVisionText from a FirebaseVisionImage. |
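A minimal sketch of using getRecognizerType() and close(), assuming a recognizer obtained as shown earlier; note that close() is declared to throw IOException:

```java
// Branch on the recognizer type reported by getRecognizerType().
if (textRecognizer.getRecognizerType() == FirebaseVisionTextRecognizer.CLOUD) {
    Log.d("OCR", "Using the cloud model");
} else {
    Log.d("OCR", "Using the on-device model");
}

// Release the model resources when done; close() may throw IOException.
try {
    textRecognizer.close();
} catch (IOException e) {
    Log.e("OCR", "Failed to close text recognizer", e);
}
```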
Detects FirebaseVisionText from a FirebaseVisionImage. The OCR is performed asynchronously.

Right now, only the following input types are supported; for best efficiency, create the FirebaseVisionImage object using one of the following ways:

- fromMediaImage(Image, int) with a YUV_420_888-formatted image from android.hardware.camera2.
- fromByteArray(byte[], FirebaseVisionImageMetadata) with an NV21-formatted image from the deprecated Camera API.
- fromByteBuffer(ByteBuffer, FirebaseVisionImageMetadata) if you need to pre-process the image, e.g. to allocate a direct ByteBuffer and write the processed pixels into it.

Other FirebaseVisionImage factory methods will work as well, but possibly slightly slower.

Returns a Task for the resulting FirebaseVisionText.
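For the camera2 path above, a hedged sketch; `mediaImage` and the rotation constant are assumptions standing in for values produced by your capture pipeline:

```java
// Inside e.g. an ImageReader.OnImageAvailableListener: `mediaImage` is a
// YUV_420_888 android.media.Image, and the rotation constant must match
// the current device/sensor orientation.
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(
        mediaImage, FirebaseVisionImageMetadata.ROTATION_0);
textRecognizer.processImage(image)
        .addOnSuccessListener(result -> Log.d("OCR", result.getText()))
        .addOnFailureListener(e -> Log.e("OCR", "Recognition failed", e));
```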