Welcome to Solano: Test Execution

Welcome to `Welcome to Solano`! In this blog series, we’ll examine a few key concepts and features of Solano Labs’ flagship product, Solano CI. The goal of the series is to orient new users as they learn the system, and to serve as a refresher for current users; we hope you find these posts handy. Please let us know what you think, or reach out if you have any questions.

Cheers!

The key goal of any CI server is to execute automated tests that evaluate whether software functions correctly, thereby giving the development team enough confidence to release that code to users and customers. While the notion of ‘functional correctness’ is a thorny subject, roughly speaking it means that the code makes the application behave as the developers intended, both as a discrete unit and when interacting with the rest of the app’s codebase.

Automated testing, then, is crucial to the success of software development, since it provides feedback about the quality of the codebase and its behavior. Solano CI was created as a market-leading way of executing those tests quickly and efficiently. It does this by standardizing the method of test execution and providing easily understood reporting of the results. Below are brief explanations of the two primary means of controlling and implementing that execution: the `solano.yml` file, and Solano’s parallel workers.

To determine which tests to run (or commands to execute), Solano uses the user-provided settings in a `solano.yml` file added to the code repo being tested. (See our docs on configuring with that file here.) The most typical way of selecting tests is by test pattern, which filters the whole list of test files down to the desired subset based on the test file names. You can configure multiple patterns to run in a build.
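As an illustration, a pattern-based selection in `solano.yml` might look like the sketch below. The file paths and glob patterns here are hypothetical examples for a Ruby/RSpec project; consult the configuration docs for the exact keys and pattern syntax your setup supports.

```yaml
# Illustrative solano.yml sketch (hypothetical patterns):
# each entry filters the repo's test files down to a subset by name.
tests:
  - spec/models/**_spec.rb    # unit specs
  - spec/features/**_spec.rb  # feature specs
```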

The YAML file provides a centralized and version-controlled way of configuring the build environment, setup process, and tests run, as well as a source of truth for historical review of individual builds.

Once that set of tests is determined, Solano CI uses a machine-learning-driven scheduling algorithm to determine the most efficient order in which to execute the tests, and the best way to subdivide them into smaller batches for optimal load balancing and compute utilization. The algorithm takes into account various metadata, such as each test’s historical runtime and its I/O and CPU usage. Those batches are then automatically distributed to the parallel workers for execution. And as you continue to run builds in Solano over time, adding or removing tests, Solano will continue to optimize automatically to achieve the best runtimes for your builds.
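To build intuition for the batching step, here is a minimal sketch of one classic load-balancing heuristic: greedily assigning each test, longest first, to the currently least-loaded worker. This is not Solano’s actual algorithm (which also weighs I/O and CPU usage, among other metadata); the function name `balance` and the runtimes are hypothetical.

```python
# Greedy load-balancing sketch: longest test first, onto the
# least-loaded worker. Runtimes are hypothetical historical
# measurements in seconds.
import heapq

def balance(test_runtimes, n_workers):
    """Return one (total_runtime, [test names]) batch per worker."""
    # Min-heap of workers keyed by accumulated runtime.
    heap = [(0.0, i, []) for i in range(n_workers)]
    heapq.heapify(heap)
    for name, secs in sorted(test_runtimes.items(), key=lambda kv: -kv[1]):
        load, i, batch = heapq.heappop(heap)  # least-loaded worker
        batch.append(name)
        heapq.heappush(heap, (load + secs, i, batch))
    return [(load, batch) for load, _, batch in sorted(heap, key=lambda w: w[1])]

runtimes = {"test_login": 30, "test_checkout": 25, "test_search": 20,
            "test_profile": 15, "test_signup": 10}
for load, batch in balance(runtimes, 2):
    print(load, batch)
```

Even this simple heuristic keeps the two workers’ total runtimes close (55s vs. 45s here), which is the point of batching: the build finishes when the slowest worker does.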

As each batch is dispatched to the workers, Solano CI runs whichever command you’ve configured to perform the tests. We recommend using the same command you use locally, so that results are consistent between the two environments. If they’re not, we’ve prepared a Troubleshooting Guide with suggested debugging steps.
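For example, if you normally run `bundle exec rspec` on your own machine, configuring that same command keeps the two environments comparable. The snippet below is a hypothetical sketch; check the configuration docs for the exact key your project type uses.

```yaml
# Hypothetical sketch: run the same command in CI as locally.
tests:
  - bundle exec rspec spec/
```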
