Solano Platform Status Update: November 2016

First and foremost, we want to apologize for delays that some customers have recently experienced with Solano CI. We are in the process of addressing a chain of related issues that emerged over the last week. We’d like to explain what these issues are and how we’ve been addressing them.

The first issue emerged on Tuesday, Nov. 1, when Solano CI began to experience capacity constraints that limited the number of sessions we could support in our SaaS production environment. To the best of our understanding, this capacity issue was due to unusually high demand for the AWS instance type that Solano CI commonly uses for its production workloads. For some time, Solano was unable to allocate new, healthy AWS instances of our preferred type in our preferred region.

In an attempt to mitigate this issue, the Solano team migrated capacity to different instance types that we had determined to be suitable backups when our preferred types are unavailable. These instance types required enough additional storage that using many of them bumped against an account-level volume size limit imposed by AWS, which killed some instances in use. Because of decreased supply (fewer instances available) and increased demand (restarting killed sessions), build queues backed up. We manually managed the queues during peak hours to limit slowness and worked with AWS to increase the storage limit, but it took multiple days to complete the process. With the higher limit, we are again able to spin up sufficient capacity for our full peak load.

Coupled with that, the high number of restarts affected other parts of the Solano system. Last week’s unusual load stressed Solano’s infrastructure, including the database, in unprecedented ways. Although most issues are resolved, we are working on one component that seems to be causing some customers’ builds to restart unexpectedly. This is still affecting queue throughput, and is now the critical priority for our engineering team.

Again, we apologize for the effect this has had. We understand how valuable fast builds are to your productivity, and we are working to correct the problem as quickly as possible. We will continue to update this blog as more information is available. In the meantime, please don’t hesitate to contact us if you have any questions or concerns.


The Solano Support Team

Leave a comment

Welcome to Solano: Test Execution

Welcome to Welcome to Solano! In this blog series, we’ll examine a few key concepts and features of Solano Labs’ flagship product, Solano CI. The goal of this series is to help orient new users while they learn the system, and to serve as a refresher for current users; we hope you find these posts handy. Please let us know what you think or if you have any questions.


The key goal of any CI server is to execute automated tests that evaluate whether software code functions correctly, thereby giving the development team enough confidence to release that code to users and customers. While the notion of ‘functional correctness’ is a thorny subject, roughly speaking it means that the code makes its application behave in a way that conforms to the developers’ intentions, both as a discrete unit and when interacting with the rest of the app’s codebase.

Automated testing, then, is crucial to the success of software development, since it provides feedback about the quality of the codebase and its behavior. Solano CI was created as a market-leading way of executing those tests quickly and efficiently. It does this by standardizing the method of test execution and providing easily understood reporting of test results. Below are brief explanations of the two primary means of controlling and implementing that execution: the `solano.yml` file and the parallel workers.

To determine which tests to run (or commands to execute), Solano uses the user-provided settings in a solano.yml file added to the repository under test. (See our docs on configuring with that file here.) The most common way of selecting which tests you want to run is by test pattern, which filters the full list of test files down to the desired subset by matching against test file names. You can configure multiple patterns to run in a build.
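For example, a minimal pattern-based selection might look like the following sketch (the glob values are hypothetical; adjust them to your repo’s layout):

```yaml
# Hypothetical solano.yml fragment: select tests by file-name pattern
tests:
  - spec/models/**_spec.rb
  - spec/controllers/**_spec.rb
```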

The YAML file provides a centralized and version-controlled way of configuring the build environment, setup process, and tests run, as well as a source of truth for historical review of individual builds.

Once that set of tests is determined, Solano CI uses a machine-learning-driven scheduling algorithm that takes into account various metadata, such as each test’s historical runtime and its I/O and CPU usage, to determine the most efficient order in which to execute the tests and the best way to subdivide the set into smaller batches for optimal load balancing and compute utilization. Those batches are then automatically distributed to the parallel workers for execution. And as you continue to run builds in Solano over time, adding and removing tests, Solano continues to optimize automatically to achieve the best runtimes for your builds.

As each batch is disseminated to the workers, Solano CI uses whichever command you’ve configured to run the tests. We recommend using the same command you use locally so that the results are the same in both environments. If they’re not, we’ve prepared a Troubleshooting Guide with suggested debugging steps.
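For instance, if you run your suite locally with RSpec, the corresponding configuration might be sketched like this (the command is an assumption; use whatever you run locally):

```yaml
# Hypothetical solano.yml fragment: run the same command in CI as locally
tests:
  - bundle exec rspec
```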

Leave a comment

Solano CI integrates with Google Container Registry


Today, we are proud to announce our integration of Solano CI with Google Container Registry (GCR). With GCR and Solano CI, you can now reliably build, test, and deploy your Docker workflow without operating your own container repositories or scaling your infrastructure. GCR is a fully-managed, secure, and highly-available Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. In addition to the simplified workflow, you can configure policies to manage permissions and control access to your images using Google Identity and Access Management (IAM). There is no easier way to gain fine-grained permission control over your Docker images today.

We decided to create an example repository to demonstrate integrating GCR and Solano CI. Each commit to the repository triggers a build that pulls a container from GCR and, once all of the tests pass, pushes the new Docker image back to GCR. Here’s the step-by-step process we used:

Set up necessary projects / permissions

Create a new project within Google Cloud Platform (GCP) for storing your container images.

You’ll need to create a service account and download the JSON key file so that Solano CI can authenticate with GCR. More information on creating a service account and downloading the .json key file is in Google’s support docs.

Once you have a new account, you’ll need to add the credentials to your Solano CI repository. Sign up for an account on our website and locate the repository you’re going to be using for your Docker container. Download the Solano CLI and, from within your repository, run:

solano suite

This will set up your Solano CI build environment.

Use the Secure Environment Variables UI or the Solano CLI to securely add the credentials:

$ solano add:repo GCR_PROJECT_ID project_id_here
$ solano add:repo GCR_SERVICE_ACCOUNT_JSON '{downloaded_json_blob_here: true}'

Push initial image to GCR

We’ll be pulling our initial image from GCR, so you’ll need to build and push your Docker image using the Google Cloud SDK’s gcloud command.

Install the Google Cloud SDK

Follow instructions on pushing your docker image to GCR


Set up Build & Deploy Scripts

The environment section of the solano.yml needs to be modified to use your container’s information.


environment:
  'DEPLOY_GCR': 'true'
  'GCR_DOCKER_APP': 'your_app_name_here'
  'GCR_DOCKER_USER': 'your_docker_user_here'
  'GCR_DOMAIN': ''  # your region's registry URL can be found here
  'GCR_DOCKER_TAG': 'latest'

timeout_hook: 900

hooks:
  pre_setup: ./scripts/
  post_build: ./scripts/

tests:
  - sudo docker run $GCR_DOCKER_USER/$GCR_DOCKER_APP:$GCR_DOCKER_TAG bash -c /

Once your solano.yml looks correct, make sure to commit it to your repository.

Our example uses two scripts to build and deploy the GCR containers. These scripts can be modified to fit your specific use case, but as written they use the environment variables supplied in the solano.yml.
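As a rough sketch of what such a deploy script does, assuming the environment variables from the solano.yml above and the gcloud Docker wrapper of the era (the image names are placeholders, and the commands are printed rather than executed):

```shell
#!/bin/sh
# Hypothetical sketch of a GCR deploy step, using the env vars from solano.yml.
# Values are placeholders; commands are echoed (dry run). Drop the "echo"
# prefixes to actually tag and push the image.
GCR_DOMAIN="us.gcr.io"
GCR_DOCKER_USER="your_docker_user_here"
GCR_DOCKER_APP="your_app_name_here"
GCR_DOCKER_TAG="latest"

# Local image name, and the fully qualified registry name it is pushed as
LOCAL_IMAGE="$GCR_DOCKER_USER/$GCR_DOCKER_APP:$GCR_DOCKER_TAG"
REMOTE_IMAGE="$GCR_DOMAIN/$GCR_DOCKER_USER/$GCR_DOCKER_APP:$GCR_DOCKER_TAG"

echo docker tag "$LOCAL_IMAGE" "$REMOTE_IMAGE"
echo gcloud docker -- push "$REMOTE_IMAGE"
```

The key step is tagging the local image with the registry domain so the push lands in your GCR project.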

Start a Build

From within your repository simply use:

$ solano run

and a Solano CI build will initiate. It will run the build script, which will attempt to authenticate using the credentials you securely stored in the first step and, if successful, will pull down the container you pushed in the second step.

For the test phase, Solano will run the Docker container that was pulled down and execute the test script from within the container. If that script exits successfully, Solano CI will start the deploy script.


The deploy script will tag the newly built container and push it back to GCR for future use.


That’s all there is to it! As you can see, GCR is a powerful and robust tool for all your container storage needs!

For more information on deploying Docker images with Solano CI, visit our documentation or contact us; we will be happy to answer your questions 🙂

Leave a comment

Secure Environment Variables UI


While it has been possible to set secure environment variables from the Solano CLI for some time (see docs here), there has been no way to do it from the app or to see what values are currently set. We have added a new page to Organization settings that will allow Org admins to manage variables that are not set from a config file.


We are also adding a new tab to the report page that will list the environment variables in order of precedence for a build. This page will display the list of variables, their scope, and their values starred out. Org Admins will be able to toggle showing the actual values.



Have any questions or comments about this or any other Solano CI feature? Please don’t hesitate to contact us via email, Twitter, or in the comments section below.


1 Comment

Easy Continuous Deployment You Can Trust ft. Apica


Thank you to everyone who joined our webinar! If you were unable to attend, worry not: the full presentation, including every slide and resource, is available in this post.

In this presentation, Brian and Troy demonstrated a continuous deployment release process using GitHub, Solano CI, Apica, and AWS CodePipeline. The release process includes smoke, unit, integration and load tests to guarantee issue-free deployments, enabling a safe and easy push-button deploy process.

Brian and Troy also demonstrated how to:

  • Build your software release pipeline by connecting different steps into an automated workflow using AWS CodePipeline
  • Capture accurate and consistent performance data for each of your releases
  • Automatically stop a release when performance or errors don’t meet predefined criteria
  • Auto-parallelize your test runs with a Continuous Integration server using Solano CI

Check out the full presentation video here

Lastly, we would like to thank Apica for co-hosting this webinar, and AWS for their support. We <3 our partners!

Any questions or feedback? Let us know! We would love to have you present at our next webinar; stay tuned for details!

Leave a comment

Continuous Integration at BigCommerce


This guest post was written by Deepa Padmanabhan, Lead Quality Engineer at BigCommerce, a large e-commerce platform enabling hundreds of businesses and processing over $9 billion in sales. This is their experience using Solano CI, as well as Solano Services, to custom-tailor solutions that address their specific needs and let them continue scaling seamlessly to deliver value to the customers that rely on them. Keep up with everything else BigCommerce engineering is working on by visiting their blog.

BigCommerce is an e-commerce platform that powers hundreds of online stores across continents. Our hosted shopping cart service processes billions of dollars in total sales from multiple merchants worldwide. Customers rely on us to operate a professional website that generates online revenue for their business. Hundreds of new features are added to the platform that enable clients to ‘Sell More’ successfully. It is imperative that such a powerful service be highly available, reliable, and robust. Quality is critical at BigCommerce, and we work relentlessly to deliver high-quality products day in and day out for the customers who rely on us.

Our engineering philosophy advocates quality at every level, from design to product release. With over 100 engineers working on our codebase and delivering product continuously, we have a product lifecycle that requires a rigorous testing process. Automated tests are a significant part of our quality process, ensuring high quality and frequent deployment to customers. Every engineer takes immense pride in automating tests as new features are developed, and the automation team validates every change using these tests. Here is the deployment flow we follow at BigCommerce to release high-quality code to customers:


Continuous Deployment Flow


Our automated tests have grown significantly, and the need to regularly test build quality and catch issues instantly led us to invest in a continuous integration tool. At BigCommerce we rely on Solano Labs for continuous integration, and we are happy we went this route. We explored many other options, including maintaining our own CI machines, but Solano CI outweighed them all for a number of reasons. Several features that Solano CI provides have greatly improved the engineering team’s efficiency.


Highlighting some of the notable features

Build Parallelization:

With developers merging code from different time zones, we had a large number of test builds queued up constantly. UI test suites took more than 5 hours to run, causing significant delays in production releases, less frequent test runs, and accumulating breaking check-ins. Solano CI’s build parallelization made this a breeze: multiple test suites can be run simultaneously, and each build is further parallelized to run the tests within a suite across multiple workers. This has significantly reduced the total time needed to run a suite. Our basic suite runs in less than 4 minutes and our UI suite in 30 minutes, significantly improving test feedback time. This has enabled us to implement feature branch monitoring, thereby catching breaking changes even before they are merged.


Customization with yml:

Several customizations to fit our needs have been made using the solano.yml config file. The ability to create stores per batch has made tests less flappy and avoids concurrency issues due to parallel test runs. An internal implementation of test reruns executes failed tests on the same Solano CI build and updates the final Solano CI output to reflect the rerun results. This has helped us generate more robust results by eliminating flappers and avoiding manual validation of them. Solano Labs also went the extra mile to modify the Solano phpunit integration and suggest changes to our custom test runner.



Grouping tests by environment and/or feature area is another important need for BigCommerce. We group tests by type (UI, API, HTTP), feature (cart, checkout), and priority (release blockers, long-running tests). Solano’s Build Profiles feature translates exactly to this need: it allows us to pass profile-specific environment variables, thereby targeting small test sets and executing only those. The Solano API also provides a way to set environment variables per session on the fly, changing each test group and running a different set of tests depending on the feature branch we are testing. This helps developers narrow down and test only the product areas affected by their change instead of waiting for all the unrelated tests to run.


Solano API:

Solano Webhooks have helped us automatically kick off builds and monitor build status without manual intervention. Our deployment workflow has been highly customized to deploy every merge and kick off tests on Solano CI automatically. Integrating Solano CI with our internal deployment tool has helped us automatically push to production on successful test runs, or alert various channels calling for attention on failed runs. Using the Solano API, we have built an internal result-monitoring tool that provides more granular analytics on test results.


Build Prioritization:

A new feature was rolled out specifically to address an immediate need of ours: production release test builds were waiting in the queue for a long time on busy days, causing deployment delays and frustration for the release team and customers. Profile prioritization enabled moving important test builds up the queue to be picked up immediately. Production test runs now start instantly rather than waiting, which has significantly improved release cycle time.


Customer Support:

Their unparalleled customer support makes it easy for us to address issues immediately. Session-specific environment setup and build prioritization are two of the features they rolled out quickly that helped accelerate our test execution time. Solano Labs has played a major role in helping us set up and roll out continuous integration within a short amount of time, and their continued support has helped us deliver a high-quality, reliable product in a timely fashion. Today, every single merge to mainline is monitored for quality, and every single production deploy is guarded by regression suites run on Solano CI.


Contact us to learn how Solano CI and Solano Services can help address your organization’s unique needs too. To learn more about BigCommerce and their e-commerce platform’s latest, follow this link.






Leave a comment

Custom Enumeration


Our recent changes to the way we handle SCM caching (read more here) have allowed us to improve the way Solano CI handles test enumeration. Previously, test enumeration happened before any user-provided commands were run. Enumeration was thus limited to finding files that matched Ruby globs and combining them with a list of tests provided in a yml file. While this works in a lot of cases, it’s not very flexible. Enter Custom Enumeration.

Custom Enumeration allows more control over which tests are run in a build. This feature adds a new hook that replaces the standard enumeration (the old way) with a list of tests generated at run time. Since the list is generated at run time, you can have much more fine-grained control over which tests are run on your builds. The basic requirement of Custom Enumeration is a command to run (in most cases a script) that generates a JSON list of tests to run. That’s it: you have complete control over how a build decides which tests it will run. We have made some examples of custom enumeration scripts available here, and the docs are available here.
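As a minimal sketch of the output such a script produces (the file names are placeholders; in practice the list would come from your own selection logic, such as a git query or a previous build’s results):

```shell
#!/bin/sh
# Hypothetical custom enumeration script: emit a JSON array of tests to run.
# The newline-separated list below is a stand-in for real selection logic.
tests='spec/models/user_spec.rb
spec/controllers/cart_spec.rb'

# Turn the newline-separated list into a JSON array on stdout
printf '%s\n' "$tests" \
  | awk 'BEGIN { printf "[" }
         { printf "%s\"%s\"", (NR > 1 ? ", " : ""), $0 }
         END { print "]" }'
```

Running this prints `["spec/models/user_spec.rb", "spec/controllers/cart_spec.rb"]`, the shape of list the enumeration hook consumes.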


Here are a few examples of what can be done with Custom Enumeration.

  • Recently Edited Files
  • Failed on last build
  • Integrations with other services (e.g. Jira, GitHub issues)

Recently Edited Files

This example uses a naive method of guessing which tests were likely affected by recent changes. It looks at which files have changed since the last passing build and attempts to map those files to matching test cases. This enumeration would be used as the first step in a plan; the second step would either run the remaining tests or run a full build. This type of enumeration tries to use a small subset of tests to fail sooner on builds that are going to fail.
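A naive version of that mapping might look like the following sketch. The path convention is an assumption, modeled on a Rails-style layout where `app/models/cart.rb` is covered by `spec/models/cart_spec.rb`, and the changed-file list stands in for a real `git diff --name-only` against the last passing commit:

```shell
#!/bin/sh
# Hypothetical: map recently changed source files to their likely spec files.
# The list below stands in for the output of a git diff against the last
# passing build's commit.
changed='app/models/cart.rb
app/views/home.html.erb
app/models/user.rb'

# Rewrite app/models/foo.rb -> spec/models/foo_spec.rb; drop non-matches
printf '%s\n' "$changed" \
  | sed -n 's|^app/models/\(.*\)\.rb$|spec/models/\1_spec.rb|p'
```

This prints `spec/models/cart_spec.rb` and `spec/models/user_spec.rb`; files with no obvious matching spec (like the view template) are dropped and left for the full-build step.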

Failed on last build

This type of enumeration can be used for multiple purposes.

  1. It could be used as the default profile and made to run only the tests that failed on the last build, or, if all tests passed, run all tests. This can be helpful on development branches when you are working on fixing failing tests.
  2. It could also be used as a rerun profile in the last step of a plan if you know that there are flaky tests in the build.

In the next post we will look at a specific example of using custom enumeration with parallel commands.

Have any questions or comments about this or any other Solano CI feature? Please don’t hesitate to contact us via email, Twitter, or in the comments section below.

Leave a comment

Welcome to Solano: Concurrency and Parallelism

Welcome to Welcome to Solano! In this blog series, we’ll examine a few key concepts and features of Solano Labs’ flagship product, Solano CI. The goal of this series is to help orient new users while they learn the system, and to serve as a refresher for current users; we hope you find these posts handy. Please let us know what you think in the comments, or send us a note if you have any questions.


Solano CI is a powerful and complex system; it is, by design, highly flexible in its configuration and potential scale so that it can support the large variety of use cases that CI/CD systems are often relied upon to perform. However, at its core, Solano CI’s primary function is to help its users produce higher quality software.

One way it does this is by returning the results of automated tests quickly. To do this, Solano CI supports the ability to perform multiple actions at the same time and thereby reduce turnaround time by simply doing more at once. There are two simple ways that Solano CI does this: (1) running multiple build “sessions” at once, or “Concurrency;” and (2) automatically subdividing the tasks performed within a session and distributing that work to “workers” to perform them at the same time, or “Parallelism.”

The number of concurrent sessions a user can perform at once is set by that user’s account plan. Each session occupies what we call a “build slot,” and users can only run as many sessions at once as their plan has build slots. When all build slots are occupied by actively running sessions, subsequent sessions that are started are queued until a running session finishes and a slot becomes available. [Editor’s Note: We’ll discuss Queues in more detail in a subsequent post; we’ll provide a link to that post here once it’s posted.]

Our free trial and metered hourly plans, e.g. the Large plan, by default all have two build slots. At the Pro and Enterprise plan levels, the number is customizable.

As a session goes through the build setup process, Solano CI spins up a number of parallel workers to actually perform the tasks, usually tests, that the user wants to execute. These parallel workers are Docker-based containers, and can run either all together on a single VM or on an orchestrated set of VMs. When the workers are up, the system scans the repo’s filesystem for test files based on the user’s configuration settings, collects that set of files, and then subdivides it into discrete batches that are distributed to the workers for execution. This distribution of batches over the duration of the test execution phase of the session is our automatic parallelism.

One important conceptual note: the number of workers used per session is set as part of the plan configuration, like the number of build slots, and is the same for all of a plan’s build slots, regardless of which repo they build. This often confuses new users, who tend to think of workers as units in a resource pool that can be arbitrarily configured per repo, so that when a smaller repo is built the superfluous workers could be redirected toward other, larger repos’ builds; this isn’t possible in Solano CI. A helpful analogy for constructing a mental model is lanes in a bowling alley: just as each lane uses the same predetermined number of pins as every other lane, so too does each build slot use the number of workers set by the user’s plan.

The number of workers per build slot varies widely by plan, even within the set of metered plans. A list of the workers per metered plan is available on our Product page; the Pro and Enterprise plans’ worker configurations are customizable.

Leave a comment

Testing at Tobi


The following is a guest post by Josh Brown, Senior DevOps Engineer at Tobi. Tobi is an online fashion label, a unique combination of fashion, technology, and retail that requires a world-class engineering process to maintain and grow such a robust e-commerce platform. Solano CI helped Tobi achieve these goals by reducing their testing time to the time it takes to get a cup of coffee: coffee time!

At Tobi, we serve millions of customers from around the world with an e-commerce platform that we built from scratch. Not only that, but all of our internal applications, like inventory tracking and product lifecycle management, are built in-house as well. The entirety of our company’s infrastructure is built by us, and so we’re acutely attuned to and invested in its performance and reliability. This need for a great and reliable platform and experience, for both ourselves and our customers, has led us to automated testing and Continuous Integration. Testing and CI are crucial to us, and here I’ll outline how we test, which systems we use, and why.

Our codebase is over 9 years old. Many of our engineers, then, interact with code that may not have changed in some time, let alone have worked directly on that code themselves. So they rely on our unit, functional, and integration tests to confidently make changes to portions of the code that have been frozen for years. These tests give us the confidence to build on that legacy with the assurance that it serves as a stable foundation.

Since we have over 400 employees using our in-house applications daily, we want engineers to write as many tests as possible so that our employees can execute optimally. Our team is encouraged to write a lot of automated tests, knowing that our CI system can handle as many tests as the codebase requires to remain stable. CI is a core part of our workflow, and all engineers at Tobi utilize it for our multiple daily releases.

The infrastructure we use to run our build and test process is based around Solano CI. We came to Solano CI in early 2016 from CircleCI, where our builds were taking about 50 minutes (a significant improvement over the 1.5 hours they took before on Jenkins). We liked both Circle and Solano because they were SaaS offerings (so we didn’t have to manage the underlying system). Both rely on a YAML configuration file to configure the build environment, select which tests to run, and manage parallelism.

The biggest reason we switched to Solano CI was, in a phrase, “coffee time.” Solano’s team encouraged me to think about what my team could do if we could get our build and test results back in the time it takes to get a cup of coffee, and intrigued, I challenged them to show me that Solano CI could make that happen. The system’s automated scheduling and parallelism were impressive, and between my conceptual understanding of setting up through a YAML config file from CircleCI and Solano’s docs I got most of the way by myself. But the biggest win, and the reason we decided to buy, was their team’s support: I took my own work to them and they offered recommendations for more efficient ways to do things like set up the database and precompile assets that, ultimately, got us to coffee time. That expertise was really valuable, and their Support team has continually shown that the Solano team are experts of CI and its best practices.

As Tobi’s operations have grown, our processes have needed to grow as well, and this setup is the current iteration of our continued efforts to create a strong testing infrastructure. We can confidently produce a resilient and dependable e-commerce platform for our customers and supporting backend infrastructure for our colleagues. And with the testing backstop secure, we can focus on what matters most – providing the best service experience and bringing our customers amazing fashion.


You can learn more about Tobi and what they are up to by clicking here.

Leave a comment

Announcing Build Pipelines

We’re very excited to announce the release of Solano Build Pipelines. Build Pipelines are a way for you to chain together multiple Solano CI sessions into a seamless Continuous Deployment pipeline.


Solano Build Pipeline Animated


Each pipeline step represents a separate Solano CI session, so each runs with its own set of Solano’s parallel workers. This means you can utilize the power and flexibility of Solano CI’s auto-parallelization to get faster build results, thereby accelerating your deploy times. Writing a deploy step can be as simple as executing a custom shell script, or you can utilize one of our many deployment integrations. A pipeline will only continue past a given step if every step preceding it passes.

Along with the release of Build Pipelines, you’re now able to run Solano Build Profiles in parallel! This means that Build Matrices can also run in parallel by default. To enable this functionality, users can change their plan key to pipeline, or can contact us and we will turn on the functionality for the plan key.

We’re pumped to see how you’re planning on using Build Pipelines. Tweet us @SolanoLabs (and me @b_kendz) with your crazy pipeline configurations, and see if you can come up with something useful that’s crazier than this pipeline:

a crazy Solano Build Pipeline

If you want to learn more about Build Pipelines, check out our documentation or contact us directly.

Leave a comment