Last updated: 2025-07-27 (UTC).

# Big test stability

The asynchronous nature of mobile applications and frameworks often makes it challenging to write reliable and repeatable tests. When a user event is injected, the testing framework must wait for the app to finish reacting to it, which could range from changing some text on screen to a complete recreation of an activity. When a test doesn't have deterministic behavior, it's *flaky*.

Modern frameworks like Compose or Espresso are designed with testing in mind, so there's a certain guarantee that the UI will be idle before the next test action or assertion. This is *synchronization*.

### Test synchronization

Issues can still arise when you run asynchronous or background operations unknown to the test, such as loading data from a database or showing infinite animations.

**Figure 1**: Test synchronization.

To increase the reliability of your test suite, you can install a way to track background operations, such as [Espresso Idling Resources](/training/testing/espresso/idling-resource). You can also replace modules with testing versions that you can query for idleness or that improve synchronization, such as [TestDispatcher](/kotlin/coroutines/coroutines-best-practices#test-coroutine-dispatcher) for coroutines or [RxIdler](https://github.com/square/RxIdler) for RxJava.

| **Warning:** Avoid pausing your tests for an arbitrary period (sleep) to let the app run and stabilize.
| This makes tests unnecessarily slow or flaky, because the same test might need more or less time to execute in different environments.

**Figure 2**: Using sleep in tests leads to slow or flaky tests.

Ways to improve stability
-------------------------

Big tests can catch many regressions at once because they test multiple components of an app. They typically run on emulators or devices, which means they have high fidelity. While large end-to-end tests provide comprehensive coverage, they are more prone to occasional failures.

The primary measures you can take to reduce flakiness are the following:

- Configure devices correctly
- Prevent synchronization issues
- Implement retries

To create big tests using [Compose](/develop/ui/compose/testing) or [Espresso](/training/testing/espresso), you typically start one of your activities and navigate as a user would, verifying that the UI behaves correctly using assertions or screenshot tests.

Other frameworks, such as [UI Automator](/training/testing/other-components/ui-automator), allow for a bigger scope, as you can interact with the system UI and other apps. However, UI Automator tests might require more manual synchronization, so they tend to be less reliable.

| **Note:** Many third-party testing frameworks use UI Automator to run tests, so the same principles apply. Prefer Espresso and Compose Test APIs to create UI tests.

Configure devices
-----------------

First, to improve the reliability of your tests, make sure that the device's operating system doesn't unexpectedly interrupt the execution of the tests, for example, when a system update dialog is shown on top of other apps or when disk space is insufficient.

Device farm providers configure their devices and emulators, so normally you don't have to take any action.
However, they might have their own configuration directives for special cases.

### Gradle-managed devices

If you manage emulators yourself, you can use [Gradle-managed devices](/studio/test/gradle-managed-devices) to define which devices to use to run your tests:

    android {
        testOptions {
            managedDevices {
                localDevices {
                    create("pixel2api30") {
                        // Use device profiles you typically see in Android Studio.
                        device = "Pixel 2"
                        // Use only API levels 27 and higher.
                        apiLevel = 30
                        // To include Google services, use "google".
                        systemImageSource = "aosp"
                    }
                }
            }
        }
    }

With this configuration, the following command creates an emulator image, starts an instance, runs the tests, and shuts the instance down:

    ./gradlew pixel2api30DebugAndroidTest

Gradle-managed devices include mechanisms to retry in the event of device disconnections, among other improvements.

Prevent synchronization issues
------------------------------

Components that perform background or asynchronous operations can cause test failures when a test statement executes before the UI is ready for it. As a test grows in scope, the chances of it becoming flaky increase. These synchronization issues are a primary source of flakiness because the test frameworks need to deduce whether an activity is *done* loading or whether they should wait longer.

| **Warning:** Avoid adding arbitrary sleep commands, because they slow down the execution of the tests and don't eliminate flakiness.

### Solutions

You can use [Espresso's idling resources](/training/testing/espresso/idling-resource) to indicate when an app is busy, but it's hard to track every asynchronous operation, especially in very big end-to-end tests. Also, idling resources can be hard to install without polluting the code under test.

Instead of estimating whether an activity is busy or not, you can make your tests wait until specific conditions have been met.
For example, you can wait until a specific text or component is shown in the UI.

**Figure 3**: Waiting for conditions to be met reduces flakiness.

Compose has a collection of testing APIs as part of [`ComposeTestRule`](/reference/kotlin/androidx/compose/ui/test/junit4/ComposeTestRule#waitUntil(kotlin.Long,kotlin.Function0)) to wait for different matchers:

    fun waitUntilAtLeastOneExists(matcher: SemanticsMatcher, timeout: Long = 1000L)

    fun waitUntilDoesNotExist(matcher: SemanticsMatcher, timeout: Long = 1000L)

    fun waitUntilExactlyOneExists(matcher: SemanticsMatcher, timeout: Long = 1000L)

    fun waitUntilNodeCount(matcher: SemanticsMatcher, count: Int, timeout: Long = 1000L)

And a generic API that takes any function returning a boolean:

    fun waitUntil(timeoutMillis: Long, condition: () -> Boolean): Unit

Example usage:

    composeTestRule.waitUntilExactlyOneExists(hasText("Continue"))

| **Key Point:** Use Idling Resources if needed in small UI tests, and wait-until APIs in bigger UI tests.

Retry mechanisms
----------------

You should fix flaky tests, but sometimes the conditions that make them fail are so improbable that they are hard to reproduce.
While you should always keep track of and fix flaky tests, a retry mechanism can help maintain developer productivity by running a test a number of times until it passes.

Retries need to happen at multiple levels to prevent issues such as:

- The connection to the device timing out or being lost
- A single test failure

Installing or configuring retries depends on your testing frameworks and infrastructure, but typical mechanisms include:

- A JUnit rule that retries any test a number of times
- A retry *action* or *step* in your CI workflow
- A system that restarts an emulator when it's unresponsive, such as Gradle-managed devices

| **Key Point:** Add retry mechanisms to big tests, but always fix flaky tests.
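The retry idea behind a JUnit rule can be sketched as a plain, framework-free loop that reruns a block until it passes or the attempts run out. This is a minimal illustration, not the API of any specific library; the `retry` function and its parameters are hypothetical names chosen for this sketch.

    // Run a block up to maxAttempts times, rethrowing only the last failure.
    fun <T> retry(maxAttempts: Int = 3, block: (attempt: Int) -> T): T {
        var lastError: Throwable? = null
        repeat(maxAttempts) { i ->
            try {
                return block(i + 1) // success: return the block's result immediately
            } catch (t: Throwable) {
                lastError = t // remember the failure and try again
            }
        }
        throw lastError ?: IllegalStateException("maxAttempts must be positive")
    }

    fun main() {
        var calls = 0
        // Simulate a flaky test that fails twice, then passes on the third attempt.
        val result = retry(maxAttempts = 3) { attempt ->
            calls++
            check(attempt >= 3) { "flaky failure on attempt $attempt" }
            "passed"
        }
        println("$result after $calls attempts") // → passed after 3 attempts
    }

A real JUnit `TestRule` wraps the same loop around the test's `Statement.evaluate()` call, so each retry reruns the full test body including its `@Before`/`@After` hooks.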