Transform output
The output of a CameraX use case is twofold: the buffer and the transformation
info. The buffer is a byte array and the transformation info is how the buffer
should be cropped and rotated before being shown to end users. How to apply the
transformation depends on the format of the buffer.
ImageCapture
For the ImageCapture
use case, the crop rect is applied to the buffer before it is saved
to disk, and the rotation is saved in the Exif data. There is no additional
action needed from the app.
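For example, a minimal sketch of saving a capture to disk. It assumes an imageCapture use case and a context are already set up elsewhere, and the output file name is a placeholder. The saved JPEG is already cropped, and viewers read the rotation from its Exif metadata.
Kotlin
val outputOptions = ImageCapture.OutputFileOptions
    .Builder(File(context.filesDir, "photo.jpg"))
    .build()
imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
            // The image on disk is already cropped, and its rotation is stored
            // in the Exif data, so it can be displayed as-is.
        }

        override fun onError(exception: ImageCaptureException) {
            // Handle the capture failure.
        }
    }
)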
Preview
For the Preview
use case, you can get the transformation information by
calling SurfaceRequest.setTransformationInfoListener().
Every time the transformation is updated, the caller receives a new
SurfaceRequest.TransformationInfo
object.
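For example, a sketch of registering the listener when your app provides its own Surface. The executor here is assumed to be one your app already owns, such as the main executor, and the rendering step is left as a comment.
Kotlin
preview.setSurfaceProvider { request: SurfaceRequest ->
    request.setTransformationInfoListener(executor) { transformationInfo ->
        // The crop rect and rotation to apply to the buffer before display.
        val cropRect = transformationInfo.cropRect
        val rotationDegrees = transformationInfo.rotationDegrees
        // Apply these in your own rendering pipeline, for example with OpenGL.
    }
    // Provide the Surface to CameraX with request.provideSurface(...).
}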
How to apply the transformation information depends on the source of the
Surface
, and is usually non-trivial. If the goal is to simply display the
preview, use PreviewView. PreviewView is a custom view that automatically
handles transformation. For advanced uses, when you need to edit the preview
stream, such as with OpenGL, look at the code sample in the CameraX core test
app.
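For reference, a minimal sketch of wiring a Preview use case to a PreviewView. The camera provider, lifecycle owner, and camera selector are assumed to be set up elsewhere.
Kotlin
val preview = Preview.Builder().build()
// PreviewView applies the crop rect and rotation automatically.
preview.setSurfaceProvider(previewView.surfaceProvider)
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview)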
Transform coordinates
Another common task is to work with the coordinates instead of the buffer, such
as drawing a box around the detected face in preview. In cases such as this, you
need to transform the coordinates of the detected face from image analysis to
preview.
The following code snippet creates a matrix that maps from image analysis
coordinates to PreviewView
coordinates. To transform the (x, y) coordinates
with a Matrix, see Matrix.mapPoints().
Kotlin
fun getCorrectionMatrix(imageProxy: ImageProxy, previewView: PreviewView): Matrix {
    val cropRect = imageProxy.cropRect
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    val matrix = Matrix()

    // A float array of the source vertices (crop rect) in clockwise order.
    val source = floatArrayOf(
        cropRect.left.toFloat(),
        cropRect.top.toFloat(),
        cropRect.right.toFloat(),
        cropRect.top.toFloat(),
        cropRect.right.toFloat(),
        cropRect.bottom.toFloat(),
        cropRect.left.toFloat(),
        cropRect.bottom.toFloat()
    )

    // A float array of the destination vertices in clockwise order.
    val destination = floatArrayOf(
        0f,
        0f,
        previewView.width.toFloat(),
        0f,
        previewView.width.toFloat(),
        previewView.height.toFloat(),
        0f,
        previewView.height.toFloat()
    )

    // The destination vertices need to be shifted based on rotation degrees. The
    // rotation degree represents the clockwise rotation needed to correct the image.

    // Each vertex is represented by 2 float numbers in the vertices array.
    val vertexSize = 2
    // The destination needs to be shifted 1 vertex for every 90° rotation.
    val shiftOffset = rotationDegrees / 90 * vertexSize
    val tempArray = destination.clone()
    for (toIndex in source.indices) {
        val fromIndex = (toIndex + shiftOffset) % source.size
        destination[toIndex] = tempArray[fromIndex]
    }
    matrix.setPolyToPoly(source, 0, destination, 0, 4)
    return matrix
}
Java
Matrix getMappingMatrix(ImageProxy imageProxy, PreviewView previewView) {
    Rect cropRect = imageProxy.getCropRect();
    int rotationDegrees = imageProxy.getImageInfo().getRotationDegrees();
    Matrix matrix = new Matrix();

    // A float array of the source vertices (crop rect) in clockwise order.
    float[] source = {
        cropRect.left,
        cropRect.top,
        cropRect.right,
        cropRect.top,
        cropRect.right,
        cropRect.bottom,
        cropRect.left,
        cropRect.bottom
    };

    // A float array of the destination vertices in clockwise order.
    float[] destination = {
        0f,
        0f,
        previewView.getWidth(),
        0f,
        previewView.getWidth(),
        previewView.getHeight(),
        0f,
        previewView.getHeight()
    };

    // The destination vertices need to be shifted based on rotation degrees.
    // The rotation degree represents the clockwise rotation needed to correct
    // the image.

    // Each vertex is represented by 2 float numbers in the vertices array.
    int vertexSize = 2;
    // The destination needs to be shifted 1 vertex for every 90° rotation.
    int shiftOffset = rotationDegrees / 90 * vertexSize;
    float[] tempArray = destination.clone();
    for (int toIndex = 0; toIndex < source.length; toIndex++) {
        int fromIndex = (toIndex + shiftOffset) % source.length;
        destination[toIndex] = tempArray[fromIndex];
    }
    matrix.setPolyToPoly(source, 0, destination, 0, 4);
    return matrix;
}
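For example, a sketch of using the returned matrix with Matrix.mapPoints() to move a detected face's bounding box into PreviewView coordinates. Here faceRect is a hypothetical Rect reported by an analyzer in the ImageAnalysis frame's coordinate system.
Kotlin
val correctionMatrix = getCorrectionMatrix(imageProxy, previewView)

// The four corners of the hypothetical detection result, as (x, y) pairs.
val points = floatArrayOf(
    faceRect.left.toFloat(), faceRect.top.toFloat(),
    faceRect.right.toFloat(), faceRect.top.toFloat(),
    faceRect.right.toFloat(), faceRect.bottom.toFloat(),
    faceRect.left.toFloat(), faceRect.bottom.toFloat()
)

// mapPoints() transforms the points in place.
correctionMatrix.mapPoints(points)
// points now holds the corners in PreviewView coordinates; compute their
// bounding box to draw an overlay on top of the preview.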