Detector for finding FirebaseVisionFaces in a supplied image.
A face detector is created via getVisionFaceDetector(FirebaseVisionFaceDetectorOptions), or via getVisionFaceDetector() if you wish to use the default options. For example, the code below creates a face detector with default options.

```java
FirebaseVisionFaceDetector faceDetector =
    FirebaseVision.getInstance().getVisionFaceDetector();
```
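If you need non-default behavior, pass a FirebaseVisionFaceDetectorOptions when creating the detector. The snippet below is a sketch using the options builder; the particular mode choices are illustrative, not required:

```java
// Sketch: build custom options, then request a detector configured with them.
FirebaseVisionFaceDetectorOptions options =
    new FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build();

FirebaseVisionFaceDetector faceDetector =
    FirebaseVision.getInstance().getVisionFaceDetector(options);
```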
To perform face detection in an image, you first need to create an instance of FirebaseVisionImage from a Bitmap, ByteBuffer, etc. See the FirebaseVisionImage documentation for more details. For example, the code below creates a FirebaseVisionImage from a Bitmap.

```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```

Then the code below can detect faces in the supplied FirebaseVisionImage.

```java
Task<List<FirebaseVisionFace>> task = faceDetector.detectInImage(image);
task.addOnSuccessListener(...).addOnFailureListener(...);
```
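Fleshing out the elided listeners, a typical caller might look like the sketch below; the body of each callback is illustrative:

```java
faceDetector.detectInImage(image)
    .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> faces) {
            // The task completed successfully; inspect each detected face.
            for (FirebaseVisionFace face : faces) {
                Rect bounds = face.getBoundingBox();
                // ... use the bounding box, landmarks, etc.
            }
        }
    })
    .addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            // The task failed with an exception; handle or log it.
        }
    });
```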
| Return type | Method |
|---|---|
| void | close() <br> Closes this FirebaseVisionFaceDetector and releases its model resources. Throws IOException. |
| Task&lt;List&lt;FirebaseVisionFace&gt;&gt; | detectInImage(FirebaseVisionImage image) <br> Detects human faces from the supplied image. |
Detects human faces from the supplied image.
For best efficiency, create a FirebaseVisionImage object in one of the following ways:

- fromMediaImage(Image, int) with a YUV_420_888-formatted image from android.hardware.camera2.
- fromByteArray(byte[], FirebaseVisionImageMetadata) with an NV21-formatted image from the deprecated Camera API.
- fromByteBuffer(ByteBuffer, FirebaseVisionImageMetadata) if you need to pre-process the image, e.g. allocate a direct ByteBuffer and write the processed pixels into it.

Other FirebaseVisionImage factory methods will work as well, but are possibly slightly slower.
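As a sketch of the ByteBuffer path above, the metadata describes the buffer's layout; the width, height, and rotation values here are placeholders for your actual frame:

```java
// Sketch: describe the layout of a pre-processed NV21 frame held in a
// direct ByteBuffer, then wrap it as a FirebaseVisionImage.
FirebaseVisionImageMetadata metadata =
    new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // placeholder: width of your frame in pixels
        .setHeight(360)  // placeholder: height of your frame in pixels
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();

FirebaseVisionImage image =
    FirebaseVisionImage.fromByteBuffer(buffer, metadata);
```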
Note that the width of the provided image cannot be less than 32 if
ALL_CONTOURS is specified.
Returns a Task that asynchronously returns a List of detected FirebaseVisionFaces.