Note: This page refers to the Camera2 package. Unless your app requires specific, low-level features from Camera2, we recommend using CameraX. Both CameraX and Camera2 support Android 5.0 (API level 21) and higher.
Multi-camera support was introduced in Android 9 (API level 28). Since its release, devices supporting the API have come to market. Many multi-camera use cases are tightly coupled with a particular hardware configuration; in other words, not all use cases are compatible with every device, which makes multi-camera features a good candidate for Play Feature Delivery.
Some typical use cases include:
- Zoom: switching between cameras depending on the crop region or desired focal length.
- Depth: using multiple cameras to build a depth map.
- Bokeh: using inferred depth information to simulate a DSLR-like narrow focus range.
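As a rough illustration of the zoom use case above, the lens-switching decision can be reduced to picking the longest physical lens whose focal length does not exceed the focal length implied by the requested zoom ratio, and letting sensor cropping cover the remainder. The sketch below is illustrative only: the class and method names are not part of the camera APIs, and it assumes zoom ratio 1.0 corresponds to the shortest available lens.

```java
import java.util.List;

// Hypothetical helper: pick which physical lens to stream from for a requested
// zoom ratio. Assumes zoomRatio 1.0 maps to the shortest focal length available.
final class LensSelector {
    // focalLengthsPerCamera.get(i) is the shortest advertised focal length of camera i
    static int selectLensForZoom(List<Float> focalLengthsPerCamera, float zoomRatio) {
        float baseline = Float.MAX_VALUE;
        for (float f : focalLengthsPerCamera) baseline = Math.min(baseline, f);
        float targetFocalLength = baseline * zoomRatio;

        // Choose the longest lens that does not exceed the target focal length;
        // digital zoom (cropping) covers the remainder.
        int best = 0;
        float bestFocal = -1f;
        for (int i = 0; i < focalLengthsPerCamera.size(); i++) {
            float f = focalLengthsPerCamera.get(i);
            if (f <= targetFocalLength && f > bestFocal) {
                best = i;
                bestFocal = f;
            }
        }
        return best;
    }
}
```

For example, with a 4.4 mm wide lens and a 6.0 mm telephoto lens, a 2x zoom request (target 8.8 mm) selects the telephoto lens, while 1x stays on the wide lens.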
The difference between logical and physical cameras
Understanding the multi-camera API requires understanding the difference between logical and physical cameras. Consider a device with three back-facing cameras. In this example, each of the three back cameras is considered a physical camera. A logical camera is a grouping of two or more of those physical cameras. The output of the logical camera can be a stream that comes from one of the underlying physical cameras, or a fused stream coming from more than one underlying physical camera simultaneously. Either way, the stream is handled by the camera Hardware Abstraction Layer (HAL).
Many phone manufacturers develop first-party camera applications, which usually come preinstalled on their devices. To use all of the hardware's capabilities, these apps sometimes use private or hidden APIs, or receive special treatment from the driver implementation that other applications don't have access to. Some devices even implement the concept of logical cameras by providing a fused stream of frames from the different physical cameras, but only to certain privileged applications. Often, only one of the physical cameras is exposed to the framework. The situation for third-party developers before Android 9 is illustrated in the following diagram:
Beginning in Android 9, private APIs are no longer allowed in Android apps. With the inclusion of multi-camera support in the framework, Android best practices strongly recommend that phone manufacturers expose a logical camera for all physical cameras facing the same direction. The following is what third-party developers should expect to see on devices running Android 9 and higher:
What the logical camera provides is entirely dependent on the OEM implementation of the camera HAL. For example, a device like the Pixel 3 implements its logical camera in such a way that it chooses one of its physical cameras based on the requested focal length and crop region.
The multi-camera API
The new API adds the following new constants, classes, and methods:
- CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
- CameraCharacteristics.getPhysicalCameraIds()
- CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()
- CameraDevice.createCaptureSession(SessionConfiguration config)
- CameraCharacteristics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE
- OutputConfiguration and SessionConfiguration
Due to a change in the Android Compatibility Definition Document (CDD), the multi-camera API also comes with certain expectations from developers. Devices with dual cameras existed before Android 9, but opening more than one camera at a time involved trial and error. On Android 9 and higher, multi-camera gives a set of rules that specify when you can open a pair of physical cameras that are part of the same logical camera.
In most cases, devices running Android 9 and higher expose all the physical cameras (except possibly for less common sensor types such as infrared) along with an easier-to-use logical camera. For every combination of streams that is guaranteed to work, one stream belonging to a logical camera can be replaced by two streams from the underlying physical cameras.
Multiple streams simultaneously
The rules for using multiple streams simultaneously in a single camera are covered in Use multiple camera streams simultaneously. The same rules apply to multiple cameras, with one notable addition: CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA explains how a logical YUV_420_888 or raw stream can be replaced with two physical streams. That is, each stream of type YUV or RAW can be replaced with two streams of identical type and size. You can start with a camera stream of the following guaranteed configuration for single-camera devices:

- Stream 1: YUV type, MAXIMUM size, from logical camera id = 0

Then, a device with multi-camera support lets you create a session replacing that logical YUV stream with two physical streams:

- Stream 1: YUV type, MAXIMUM size, from physical camera id = 1
- Stream 2: YUV type, MAXIMUM size, from physical camera id = 2

You can replace a YUV or RAW stream with two equivalent streams if and only if those two cameras are part of a logical camera grouping listed under CameraCharacteristics.getPhysicalCameraIds().
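That condition can be stated as a simple predicate. The sketch below is illustrative only (these names are not part of the camera APIs): substitution is guaranteed only when both physical IDs belong to the same logical camera's grouping.

```java
import java.util.Set;

// Hypothetical check mirroring the rule above: replacing one logical YUV/RAW
// stream with two physical streams is only guaranteed when both physical IDs
// belong to the logical camera's grouping (as reported by getPhysicalCameraIds()).
final class SubstitutionCheck {
    static boolean canSubstitute(Set<String> physicalIdsOfLogicalCamera,
                                 String id1, String id2) {
        return !id1.equals(id2)
                && physicalIdsOfLogicalCamera.contains(id1)
                && physicalIdsOfLogicalCamera.contains(id2);
    }
}
```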
The guarantees provided by the framework are just the minimum required to get frames from more than one physical camera simultaneously. Most devices support additional streams, sometimes even allowing multiple physical camera devices to be opened independently. Because this is not a hard guarantee from the framework, it requires per-device testing and tuning through trial and error.
Creating a session with multiple physical cameras
When using physical cameras on a multi-camera-enabled device, open a single CameraDevice (the logical camera) and interact with it within a single session. Create the single session using the API CameraDevice.createCaptureSession(SessionConfiguration config), which was added in API level 28. The session configuration has a number of output configurations, each of which has a set of output targets and, optionally, a desired physical camera ID.

Capture requests have an output target associated with them. The framework determines which physical (or logical) camera the requests are sent to based on which output target is attached. If the output target corresponds to one of the output targets that was sent as an output configuration along with a physical camera ID, then that physical camera receives and processes the request.
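The routing rule just described can be sketched as a lookup. The code below is illustrative only: the class, method, and map stand in for session state that the framework keeps internally after you configure outputs (via OutputConfiguration.setPhysicalCameraId()); none of these names are part of the camera APIs.

```java
import java.util.Map;

// Hypothetical model of the framework's routing decision: a target that was
// registered in an OutputConfiguration with a physical camera ID is served by
// that physical camera; any other target falls back to the logical camera.
final class RequestRouting {
    static String cameraForTarget(String target,
                                  Map<String, String> physicalIdByTarget,
                                  String logicalCameraId) {
        return physicalIdByTarget.getOrDefault(target, logicalCameraId);
    }
}
```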
Using a pair of physical cameras

One of the additions to the camera APIs for multi-camera is the ability to identify logical cameras and find the physical cameras behind them. You can define a function to help identify potential pairs of physical cameras that can be used to replace one of the logical camera streams:
Kotlin
/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)

fun findDualCameras(manager: CameraManager, facing: Int? = null): List<DualCamera> {
    val dualCameras = mutableListOf<DualCamera>()

    // Iterate over all the available camera characteristics
    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        // Filter by cameras facing the requested direction
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        // Filter by logical cameras
        // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
    }.forEach {
        // All possible pairs from the list of physical cameras are valid results
        // NOTE: There could be N physical cameras as part of a logical camera grouping
        // getPhysicalCameraIds() requires API >= 28
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in 0 until physicalCameras.size) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(DualCamera(
                    it.second, physicalCameras[idx1], physicalCameras[idx2]))
            }
        }
    }

    return dualCameras
}
Java
/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
final class DualCamera {
    final String logicalId;
    final String physicalId1;
    final String physicalId2;

    DualCamera(String logicalId, String physicalId1, String physicalId2) {
        this.logicalId = logicalId;
        this.physicalId1 = physicalId1;
        this.physicalId2 = physicalId2;
    }
}

List<DualCamera> findDualCameras(CameraManager manager, Integer facing) {
    List<DualCamera> dualCameras = new ArrayList<>();
    List<String> cameraIdList;
    try {
        cameraIdList = Arrays.asList(manager.getCameraIdList());
    } catch (CameraAccessException e) {
        e.printStackTrace();
        cameraIdList = new ArrayList<>();
    }

    // Iterate over all the available camera characteristics
    cameraIdList.stream()
        .map(id -> {
            try {
                CameraCharacteristics characteristics = manager.getCameraCharacteristics(id);
                return new Pair<>(characteristics, id);
            } catch (CameraAccessException e) {
                e.printStackTrace();
                return null;
            }
        })
        .filter(pair -> {
            // Filter by cameras facing the requested direction
            return (pair != null) &&
                    (facing == null || pair.first.get(CameraCharacteristics.LENS_FACING).equals(facing));
        })
        .filter(pair -> {
            // Filter by logical cameras
            // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
            IntPredicate logicalMultiCameraPred =
                    arg -> arg == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA;
            return Arrays.stream(pair.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES))
                    .anyMatch(logicalMultiCameraPred);
        })
        .forEach(pair -> {
            // All possible pairs from the list of physical cameras are valid results
            // NOTE: There could be N physical cameras as part of a logical camera grouping
            // getPhysicalCameraIds() requires API >= 28
            String[] physicalCameras = pair.first.getPhysicalCameraIds().toArray(new String[0]);
            for (int idx1 = 0; idx1 < physicalCameras.length; idx1++) {
                for (int idx2 = idx1 + 1; idx2 < physicalCameras.length; idx2++) {
                    dualCameras.add(new DualCamera(
                            pair.second, physicalCameras[idx1], physicalCameras[idx2]));
                }
            }
        });

    return dualCameras;
}
State handling for the physical cameras is controlled by the logical camera. To open a "dual camera," open the logical camera corresponding to the physical cameras of interest:
Kotlin
fun openDualCamera(cameraManager: CameraManager,
                   dualCamera: DualCamera,
                   // AsyncTask is deprecated beginning API 30
                   executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                   callback: (CameraDevice) -> Unit) {

    // openCamera() requires API >= 28
    cameraManager.openCamera(
        dualCamera.logicalId, executor, object : CameraDevice.StateCallback() {
            override fun onOpened(device: CameraDevice) = callback(device)
            // Omitting for brevity...
            override fun onError(device: CameraDevice, error: Int) = onDisconnected(device)
            override fun onDisconnected(device: CameraDevice) = device.close()
        })
}
Java
interface CameraDeviceCallback {
    void callback(CameraDevice cameraDevice);
}

void openDualCamera(CameraManager cameraManager,
                    DualCamera dualCamera,
                    Executor executor,
                    CameraDeviceCallback cameraDeviceCallback) {
    // openCamera() requires API >= 28
    cameraManager.openCamera(dualCamera.logicalId, executor, new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            cameraDeviceCallback.callback(cameraDevice);
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            cameraDevice.close();
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int i) {
            onDisconnected(cameraDevice);
        }
    });
}
Other than selecting which camera to open, this process is the same as opening a camera in earlier versions of Android. Creating a capture session using the new session configuration API tells the framework to associate certain targets with specific physical camera IDs:
Kotlin
/**
 * Helper type definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
typealias DualCameraOutputs =
        Triple<MutableList<Surface>?, MutableList<Surface>?, MutableList<Surface>?>

fun createDualCameraSession(cameraManager: CameraManager,
                            dualCamera: DualCamera,
                            targets: DualCameraOutputs,
                            // AsyncTask is deprecated beginning API 30
                            executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                            callback: (CameraCaptureSession) -> Unit) {

    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) }
    val outputConfigsPhysical1 = targets.second?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } }
    val outputConfigsPhysical2 = targets.third?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } }

    // Put all the output configurations into a single flat array
    val outputConfigsAll = arrayOf(
        outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
        .filterNotNull().flatMap { it }

    // Instantiate a session configuration that can be used to create a session
    val sessionConfiguration = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = callback(session)
            // Omitting for brevity...
            override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close()
        })

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor = executor) {

        // Finally create the session and return via callback
        it.createCaptureSession(sessionConfiguration)
    }
}
Java
/**
 * Helper class definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
final class DualCameraOutputs {
    private final List<Surface> logicalCamera;
    private final List<Surface> firstPhysicalCamera;
    private final List<Surface> secondPhysicalCamera;

    public DualCameraOutputs(List<Surface> logicalCamera,
                             List<Surface> firstPhysicalCamera,
                             List<Surface> secondPhysicalCamera) {
        this.logicalCamera = logicalCamera;
        this.firstPhysicalCamera = firstPhysicalCamera;
        this.secondPhysicalCamera = secondPhysicalCamera;
    }

    public List<Surface> getLogicalCamera() { return logicalCamera; }
    public List<Surface> getFirstPhysicalCamera() { return firstPhysicalCamera; }
    public List<Surface> getSecondPhysicalCamera() { return secondPhysicalCamera; }
}

interface CameraCaptureSessionCallback {
    void callback(CameraCaptureSession cameraCaptureSession);
}

void createDualCameraSession(CameraManager cameraManager,
                             DualCamera dualCamera,
                             DualCameraOutputs targets,
                             Executor executor,
                             CameraCaptureSessionCallback cameraCaptureSessionCallback) {
    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    List<OutputConfiguration> outputConfigsLogical = targets.getLogicalCamera().stream()
            .map(OutputConfiguration::new)
            .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical1 = targets.getFirstPhysicalCamera().stream()
            .map(s -> {
                OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                outputConfiguration.setPhysicalCameraId(dualCamera.physicalId1);
                return outputConfiguration;
            })
            .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical2 = targets.getSecondPhysicalCamera().stream()
            .map(s -> {
                OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                outputConfiguration.setPhysicalCameraId(dualCamera.physicalId2);
                return outputConfiguration;
            })
            .collect(Collectors.toList());

    // Put all the output configurations into a single flat array
    List<OutputConfiguration> outputConfigsAll = Stream.of(
            outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
            .filter(Objects::nonNull)
            .flatMap(Collection::stream)
            .collect(Collectors.toList());

    // Instantiate a session configuration that can be used to create a session
    SessionConfiguration sessionConfiguration = new SessionConfiguration(
            SessionConfiguration.SESSION_REGULAR,
            outputConfigsAll,
            executor,
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                    cameraCaptureSessionCallback.callback(cameraCaptureSession);
                }

                // Omitting for brevity...
                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                    cameraCaptureSession.getDevice().close();
                }
            });

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor, (CameraDevice c) ->
            // Finally create the session and return via callback
            c.createCaptureSession(sessionConfiguration));
}
To learn about the supported combinations of streams, see createCaptureSession. Combining streams works for multiple streams on a single logical camera. The compatibility extends to using the same configuration and replacing one of those streams with two streams from two physical cameras that are part of the same logical camera.

With the camera session ready, dispatch the desired capture requests. Each target of the capture request receives its data from its associated physical camera, if one is in use, or falls back to the logical camera.
Zoom example use case
It is possible to combine physical cameras into a single stream so that users can switch between the different physical cameras to experience a different field of view, effectively capturing a different "zoom level."

First, pick the pair of physical cameras you want to allow users to switch between. For maximum effect, you can pick the pair of cameras that provide the minimum and maximum focal lengths available.
Kotlin
fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? {

    return findDualCameras(manager, facing).map {
        val characteristics1 = manager.getCameraCharacteristics(it.physicalId1)
        val characteristics2 = manager.getCameraCharacteristics(it.physicalId2)

        // Query the focal lengths advertised by each physical camera
        val focalLengths1 = characteristics1.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengths2 = characteristics2.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)

        // Compute the largest difference between min and max focal lengths between cameras
        val focalLengthsDiff1 = focalLengths2.maxOrNull()!! - focalLengths1.minOrNull()!!
        val focalLengthsDiff2 = focalLengths1.maxOrNull()!! - focalLengths2.minOrNull()!!

        // Return the pair of camera IDs and the difference between min and max focal lengths
        if (focalLengthsDiff1 < focalLengthsDiff2) {
            Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1)
        } else {
            Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2)
        }

        // Return only the pair with the largest difference, or null if no pairs are found
    }.maxByOrNull { it.second }?.first
}
Java
// Utility functions to find min/max value in float[]
float findMax(float[] array) {
    float max = Float.NEGATIVE_INFINITY;
    for (float cur : array) max = Math.max(max, cur);
    return max;
}

float findMin(float[] array) {
    float min = Float.POSITIVE_INFINITY;
    for (float cur : array) min = Math.min(min, cur);
    return min;
}

DualCamera findShortLongCameraPair(CameraManager manager, Integer facing) {
    return findDualCameras(manager, facing).stream()
        .map(c -> {
            CameraCharacteristics characteristics1;
            CameraCharacteristics characteristics2;
            try {
                characteristics1 = manager.getCameraCharacteristics(c.physicalId1);
                characteristics2 = manager.getCameraCharacteristics(c.physicalId2);
            } catch (CameraAccessException e) {
                e.printStackTrace();
                return null;
            }

            // Query the focal lengths advertised by each physical camera
            float[] focalLengths1 = characteristics1.get(
                    CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
            float[] focalLengths2 = characteristics2.get(
                    CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);

            // Compute the largest difference between min and max focal lengths between cameras
            Float focalLengthsDiff1 = findMax(focalLengths2) - findMin(focalLengths1);
            Float focalLengthsDiff2 = findMax(focalLengths1) - findMin(focalLengths2);

            // Return the pair of camera IDs and the difference between min and max focal lengths
            if (focalLengthsDiff1 < focalLengthsDiff2) {
                return new Pair<>(new DualCamera(c.logicalId, c.physicalId1, c.physicalId2),
                        focalLengthsDiff1);
            } else {
                return new Pair<>(new DualCamera(c.logicalId, c.physicalId2, c.physicalId1),
                        focalLengthsDiff2);
            }
        })
        // Return only the pair with the largest difference, or null if no pairs are found
        .max(Comparator.comparing(pair -> pair.second))
        .map(pair -> pair.first)
        .orElse(null);
}
A viable architecture is to have two SurfaceViews, one for each stream. These SurfaceViews are swapped based on user interaction so that only one is visible at any given time.

The following code shows how to open the logical camera, configure the camera outputs, create a camera session, and start two preview streams:
Kotlin
val cameraManager: CameraManager = ...

// Get the two output targets from the activity / fragment
val surface1 = ...  // from SurfaceView
val surface2 = ...  // from SurfaceView

val dualCamera = findShortLongCameraPair(cameraManager)!!
val outputTargets = DualCameraOutputs(
    null, mutableListOf(surface1), mutableListOf(surface2))

// Here you open the logical camera, configure the outputs and create a session
createDualCameraSession(cameraManager, dualCamera, targets = outputTargets) { session ->

    // Create a single request which has one target for each physical camera
    // NOTE: Each target receives frames from only its associated physical camera
    val requestTemplate = CameraDevice.TEMPLATE_PREVIEW
    val captureRequest = session.device.createCaptureRequest(requestTemplate).apply {
        arrayOf(surface1, surface2).forEach { addTarget(it) }
    }.build()

    // Set the sticky request for the session and you are done
    session.setRepeatingRequest(captureRequest, null, null)
}
Java
CameraManager manager = ...;

// Get the two output targets from the activity / fragment
Surface surface1 = ...;  // from SurfaceView
Surface surface2 = ...;  // from SurfaceView

DualCamera dualCamera = findShortLongCameraPair(manager, null);
DualCameraOutputs outputTargets = new DualCameraOutputs(
        null, Collections.singletonList(surface1), Collections.singletonList(surface2));

// Here you open the logical camera, configure the outputs and create a session
createDualCameraSession(manager, dualCamera, outputTargets, null, (session) -> {

    // Create a single request which has one target for each physical camera
    // NOTE: Each target receives frames from only its associated physical camera
    CaptureRequest.Builder captureRequestBuilder;
    try {
        captureRequestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        Arrays.asList(surface1, surface2).forEach(captureRequestBuilder::addTarget);

        // Set the sticky request for the session and you are done
        session.setRepeatingRequest(captureRequestBuilder.build(), null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
});
All that is left is to provide a UI for the user to switch between the two surfaces, such as a button or double-tapping the SurfaceView. You could even perform some form of scene analysis and switch between the two streams automatically.
Lens distortion
All lenses produce a certain amount of distortion. In Android, you can query the distortion created by a lens using CameraCharacteristics.LENS_DISTORTION, which replaces the now-deprecated CameraCharacteristics.LENS_RADIAL_DISTORTION. For logical cameras, the distortion is minimal, and your application can use the frames more or less as they come from the camera. For physical cameras, there are potentially very different lens configurations, especially on wide-angle lenses.

Some devices may implement automatic distortion correction via CaptureRequest.DISTORTION_CORRECTION_MODE. Distortion correction defaults to being on for most devices.
Kotlin
val cameraSession: CameraCaptureSession = ...

// Use still capture template to build the capture request
val captureRequest = cameraSession.device.createCaptureRequest(
    CameraDevice.TEMPLATE_STILL_CAPTURE
)

// Determine if this device supports distortion correction
val characteristics: CameraCharacteristics = ...
val supportsDistortionCorrection = characteristics.get(
    CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES
)?.contains(
    CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
) ?: false

if (supportsDistortionCorrection) {
    captureRequest.set(
        CaptureRequest.DISTORTION_CORRECTION_MODE,
        CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
    )
}

// Add output target, set other capture request parameters...

// Dispatch the capture request
cameraSession.capture(captureRequest.build(), ...)
Java
CameraCaptureSession cameraSession = ...;

// Use still capture template to build the capture request
CaptureRequest.Builder captureRequestBuilder = null;
try {
    captureRequestBuilder = cameraSession.getDevice().createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE
    );
} catch (CameraAccessException e) {
    e.printStackTrace();
}

// Determine if this device supports distortion correction
CameraCharacteristics characteristics = ...;
boolean supportsDistortionCorrection = Arrays.stream(
        characteristics.get(CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES))
        .anyMatch(i -> i == CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY);

if (supportsDistortionCorrection) {
    captureRequestBuilder.set(
            CaptureRequest.DISTORTION_CORRECTION_MODE,
            CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
    );
}

// Add output target, set other capture request parameters...

// Dispatch the capture request
cameraSession.capture(captureRequestBuilder.build(), ...);
Setting a capture request in this mode can affect the frame rate that the camera can produce. You may choose to set distortion correction only on still image captures.
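If your app disables correction and works with the distorted frames directly, the coefficients reported by LENS_DISTORTION describe a radial-plus-tangential (Brown-Conrady style) model applied to normalized sensor coordinates. The sketch below is a hedged illustration of that kind of model, not a drop-in replacement for the framework's correction; the class and method names are hypothetical, and you should consult the LENS_DISTORTION reference for the authoritative formula and coordinate conventions.

```java
// Hypothetical sketch of a Brown-Conrady style distortion model with five
// coefficients k = [k1, k2, k3, k4, k5]: k1..k3 are radial terms, k4..k5 are
// tangential terms. Input (x, y) is a point in normalized sensor coordinates.
final class DistortionModel {
    static float[] correct(float x, float y, float[] k) {
        float r2 = x * x + y * y;
        // Radial component: 1 + k1*r^2 + k2*r^4 + k3*r^6
        float radial = 1 + k[0] * r2 + k[1] * r2 * r2 + k[2] * r2 * r2 * r2;
        // Tangential components use the remaining two coefficients
        float xc = x * radial + k[3] * (2 * x * y) + k[4] * (r2 + 2 * x * x);
        float yc = y * radial + k[4] * (2 * x * y) + k[3] * (r2 + 2 * y * y);
        return new float[] { xc, yc };
    }
}
```

With all coefficients at zero the model is the identity, which matches a lens that reports no distortion.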