
How Does It Work? 🔬


Last updated 1 year ago

Redefine uses a machine learning model to prioritize tests, an approach grounded in research on Predictive Test Selection. For further details on Predictive Test Selection, please see the case study by Meta.

Redefine's model is trained daily on more than 100 unique features (attributes), such as the specific code changes, the author of the changes, and their relationship to each test.
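Conceptually, a predictive test selection model scores each (code change, test) pair from features like the ones mentioned above. The following is a minimal illustrative sketch; the feature names, weights, and scoring function are hypothetical and are not Redefine's actual model:

```python
import math

def extract_features(change, test):
    """Toy feature vector for one (code change, test) pair.
    All feature names here are hypothetical illustrations."""
    return {
        # Overlap between changed files and files this test historically relates to
        "file_overlap": len(set(change["files"]) & set(test["related_files"])),
        # Historical failure rate of this test
        "failure_rate": test["failures"] / max(test["runs"], 1),
        # Whether this change's author has broken this test before
        "author_broke_before": 1.0 if change["author"] in test["past_breakers"] else 0.0,
    }

def failure_probability(features, weights):
    """Logistic scoring: a higher score means the test is more
    likely to be relevant to (fail on) this change."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical data for one change and one test
change = {"files": ["billing/invoice.py"], "author": "dana"}
test = {"related_files": ["billing/invoice.py"], "failures": 3, "runs": 50,
        "past_breakers": {"dana"}}
weights = {"file_overlap": 2.0, "failure_rate": 4.0, "author_broke_before": 1.5}

p = failure_probability(extract_features(change, test), weights)
print(round(p, 3))
```

A real model trained on 100+ features and daily outcome data would replace the hand-picked weights, but the shape of the prediction is the same: rank tests by their estimated relevance to the change.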

Decision Rules

Beyond its model, Redefine's decision engine also implements a set of rules to ensure an optimal developer experience, irrespective of the decisions made by the model.

Rerun Failed Tests Rule

To streamline debugging of failed tests, any test that failed on a given git branch is automatically included in that branch's next build, ensuring the best possible developer experience.
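As a sketch, this rule amounts to taking the union of the model's selection and the branch's most recent failures (illustrative logic only, not Redefine's implementation):

```python
def select_tests(model_selection, previous_build_failures):
    """Rerun-failed-tests rule: tests that failed on this branch's
    previous build are always included, regardless of the model's choice."""
    return set(model_selection) | set(previous_build_failures)

# Hypothetical test names
selected = select_tests(
    model_selection={"test_checkout", "test_login"},
    previous_build_failures={"test_payment_retry"},
)
print(sorted(selected))
```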

Skip Failures from the Main Branch Rule

To prevent feature branches from being blocked by failed tests that accidentally made their way into the main branch, Redefine automatically ignores those tests until they pass. It does this by considering the starting point (base commit) of the feature branch: depending on whether the base commit is older or newer, a different set of failed tests may be ignored. By adapting in this way, Redefine keeps the testing process smooth for feature branches.
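This rule can be pictured as subtracting the failures that were already broken on main at the branch's base commit. A minimal sketch, using hypothetical commit and test data:

```python
# Failures recorded on main, keyed by commit, oldest to newest (hypothetical data)
MAIN_FAILURES = {
    "c1": set(),
    "c2": {"test_reports"},                 # test_reports started failing at c2
    "c3": {"test_reports", "test_export"},  # test_export started failing at c3
}

def known_failures_at(base_commit):
    """Failures already broken on main when the feature branch started.
    A branch based on an older commit sees a different (smaller) set."""
    return MAIN_FAILURES[base_commit]

def branch_blocking_failures(branch_failures, base_commit):
    """Only failures NOT already broken on main should block the branch."""
    return set(branch_failures) - known_failures_at(base_commit)

# A branch based on c3 is not blocked by test_reports or test_export:
print(sorted(branch_blocking_failures({"test_reports", "test_new_feature"}, "c3")))
```

A branch based on c1, by contrast, would be blocked by test_reports too, which is why the base commit's age changes which failures are ignored.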

Skip Flaky Test Rule

To improve your developer experience and reduce flakiness in your CI, you can manually set up a rule to handle flaky tests. Using the flaky_filter_threshold configuration, you can specify a flakiness threshold above which unreliable tests will be skipped.

If you want to see how each test is rated for flakiness, visit the Test Inspection Dashboard, where you can explore your tests and their respective flakiness scores.

Exploration Rule

Because the test selection model is trained on historical test outcomes, any new test is automatically executed for up to several dozen builds. This exploration process lets the model gather enough information about the test to predict its relevance with high accuracy.

Test or Test-File Renaming/Moving

When a test is renamed or moved to a different file, it is treated as a new test, and Redefine will automatically execute it without consulting the test selection model. For this reason, exercise caution with large refactors: before implementing a large-scale refactor that alters test signatures, it is highly recommended to switch Redefine to Discover mode.

Test File Changed Rule

If a test file is changed, its tests automatically run in the next builds, even if other rules say to skip them. This ensures you can fix any issues with the test: even when the Skip Failures from the Main Branch Rule or the Skip Flaky Test Rule would skip it, this rule takes priority so that tests get fixed and stay reliable.

This rule is enabled by default in Optimize mode, while in Prioritize and Fail-Fast modes it is disabled by default. To change the default setting, use the skip_known_failures configuration.
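Putting the rules together, the precedence described above can be sketched as a small decision function. All attribute names, thresholds, and defaults here are illustrative, not Redefine's actual API or configuration values:

```python
def should_run(test, flaky_threshold=0.8, exploration_builds=30):
    """Decide whether to run a test, applying the rules in priority order.
    `test` is a dict of hypothetical per-test attributes."""
    # Test File Changed Rule: highest priority, overrides every skip rule
    if test["file_changed"]:
        return True
    # Exploration Rule: new tests always run for their first builds
    if test["builds_seen"] < exploration_builds:
        return True
    # Skip Failures from the Main Branch Rule
    if test["failing_on_main_at_base"]:
        return False
    # Skip Flaky Test Rule: skip tests above the flakiness threshold
    if test["flakiness_score"] > flaky_threshold:
        return False
    # Otherwise defer to the model's prediction
    return test["model_says_run"]

flaky = {"file_changed": False, "builds_seen": 500,
         "failing_on_main_at_base": False,
         "flakiness_score": 0.95, "model_says_run": True}
print(should_run(flaky))                            # skipped: too flaky
print(should_run({**flaky, "file_changed": True}))  # runs: its file changed
```

The ordering of the `if` statements is the point: a changed test file wins over every skip rule, and exploration of new tests happens before any skipping is considered.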

[Figure: Bipartite graph illustrating file-test correlations]