After covering some general concepts, we'll present the current state of CI/CD for WoA and dive into how to set it up concretely on the most widely used platforms available today. Finally, we'll present the infrastructure we use at Linaro, in the Windows on Arm team, to track and support our work.
About CI
The rise of CI in the modern software development era has helped to increase the quality and robustness of our work and to shorten release cycles. However, it has also created a lot of complexity for teams, which sometimes prevents them from understanding their own CI and from running it locally. If you're interested in this topic, the two books from Martin Fowler's signature series - Continuous Integration and Continuous Delivery - are definitely a good reference, despite their age.
Here, we’ll focus on good attributes to have in your own CI infrastructure, so that it will be reliable and efficient.
Reproducibility
This can never be said enough: your CI environment should be reproducible. A clean machine should be installed and configured with a single command, in a fully automated way, and developers should be able to do the same locally without any effort. Docker, and containerization in general, is a wonderful tool for this. Alas, it's not yet an option here, mainly because Windows-based container images are not available for WoA.
But don't lose hope: you can always script your machine installation so that it can easily be reproduced on several machines. It's the approach we chose in our team, and the one we currently recommend. The important thing is to have something automated.
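As a minimal sketch, assuming winget is available on the machine (the package IDs below are only examples, not our actual dependency list), such a setup script can be as short as this:

# setup.ps1 - hypothetical one-shot machine setup; package IDs are examples only
$ErrorActionPreference = "Stop"   # make PowerShell stop on cmdlet errors

# Install the tools the project needs; winget ships with Windows 11.
winget install --id Git.Git --exact --silent
winget install --id Python.Python.3.11 --exact --silent
winget install --id Kitware.CMake --exact --silent

# Project-specific configuration goes here, so a clean machine is ready in one command.
git config --global core.autocrlf false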
Software is a discipline where things can be easily reproduced, so take advantage of this!
Isolation
A job run in CI should not impact the subsequent runs. Usually, this is done by provisioning virtual machines on demand with a cloud provider, or by using containerization. For now, there is no solution for this specific point on WoA. Yes, WoA VMs are available on Azure, but no existing CI system is able to provision them on demand.
Scalability
Your CI infrastructure should be able to grow and shrink on demand. Like Isolation, this can’t be done yet, but should be available at some point in the future. For now, you have to deploy enough machines to support your workload.
Ease of use
Finally, instead of writing lengthy documentation on how to build, test and deliver your program, you should just have simple build, test and deliver scripts (Python is a good candidate here), and make sure they work in your reproducible environment without any manual setup.
In addition to making your project easy for any developer to modify, your CI configuration will stay trivial, and you'll be able to change CI provider seamlessly.
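For illustration, here is a minimal sketch of such an entry point (the file name and the use of CMake and Ninja are assumptions, not a prescription):

#!/usr/bin/env python3
"""build.py - single entry point used both locally and in CI (illustrative sketch)."""
import subprocess


def run(*cmd):
    # Echo each command so CI logs show exactly what was executed.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # Configure, build and test out of tree; a CI job only has to call `python build.py`.
    run("cmake", "-S", ".", "-B", "build", "-G", "Ninja")
    run("cmake", "--build", "build")
    run("ctest", "--test-dir", "build", "--output-on-failure")


if __name__ == "__main__":
    main()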
Current state of the CI ecosystem for Windows on Arm
For now, no CI provider offers its own Windows on Arm runners, which means you'll have to deploy and maintain your own machines. This is what is called a self-hosted runner.
Bring your own device
Physical machines
Apple Silicon Macs: you can easily run a Windows on Arm virtual machine for free using UTM. The Mac itself is pricey, but it might be worth it if you need to support that platform too.
Cloud
Since April 2022, it is possible to run Windows on Arm on Azure.
Software
OS
We recommend running Windows 11, because it can run any x64 software through emulation (Windows 10 on Arm only emulates 32-bit x86).
Toolchain
From Visual Studio 2022 17.4 onwards, Visual Studio and the Microsoft Visual C++ (MSVC) toolchain are natively available for Arm64, offering a great speedup over the previously emulated versions.
clang-cl is a drop-in replacement for the MSVC compiler and linker. It still uses the headers and libraries from MSVC and is therefore ABI compatible with MSVC.
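For example, a file built with cl.exe can usually be built with the same flags by simply swapping the driver (illustrative commands, the file names are made up):

clang-cl /c /EHsc /O2 hello.cpp
lld-link hello.obj /out:hello.exe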
If you rely on GCC and MinGW, these tools are not yet available for WoA. This is a work in progress, thanks to Zac Walker, who has already upstreamed patches to GNU Binutils.
llvm-mingw, available today, is a GCC replacement targeting MinGW (and thus not compatible with the MSVC ABI); it is the result of the hard work of Martin Storsjö and others. It can even be used to cross-compile WoA binaries from Linux, like VideoLAN (VLC) does!
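For instance, with the llvm-mingw toolchain on your PATH, producing a WoA executable from a Linux host is a single command (illustrative, the file names are made up):

aarch64-w64-mingw32-clang hello.c -O2 -o hello.exe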
Running CI for Windows on Arm
All existing CI providers allow you to add self-hosted runners/agents. Thanks to this, any new architecture can be supported, as long as it can execute the runner program. In this section, we’ll present how to deploy your own runner for each platform, and how to use it.
Azure Pipelines
Follow these instructions to create and configure a new agent. It should be automatically registered as a service and start with your machine.
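In short, after downloading and extracting the agent, the unattended configuration looks roughly like this (organisation, token and agent name are placeholders; check the linked instructions for the authoritative options):

.\config.cmd --unattended ^
  --url https://dev.azure.com/<YOUR-ORGANIZATION> ^
  --auth pat --token <YOUR-PAT> ^
  --pool Default ^
  --agent <AGENT-NAME-HERE> ^
  --runAsService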
You can then use your new agent by selecting it in your pipeline yaml description:
trigger:
- master

pool:
  name: Default
  demands:
  - agent.name -equals <AGENT-NAME-HERE>

steps:
- script: cmd /c "echo Hello World"
  displayName: 'Say Hi'
GitHub
Setting up a runner with GitHub is really straightforward. Just follow this guide.
It will give you all the commands you need to copy-paste to download, register and launch the runner. A native Arm64 version is available.
Finally, simply use runs-on: self-hosted to use this runner:
name: hello-world
on: push
jobs:
  my-job:
    runs-on: self-hosted
    steps:
      - name: say-hi
        run: echo "Hello World!"
GitLab
On GitLab, the process needs a few more steps. First, obtain a token to register your runner to a project or subgroup. Then, download gitlab-runner.exe (the 32-bit x86 build if you're running Windows 10, which cannot emulate x64). Finally, register it with the shell executor.
You should assign a specific tag name for this runner, or group of runners, so you can easily refer to it in your pipelines, like win-arm64 for instance.
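The registration can be done non-interactively; depending on your gitlab-runner version, it looks roughly like this (the token is a placeholder):

gitlab-runner.exe register --non-interactive ^
  --url https://gitlab.com/ ^
  --registration-token <REGISTRATION-TOKEN> ^
  --executor shell ^
  --shell powershell ^
  --tag-list win-arm64 ^
  --description "WoA runner"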
Finally, use the tags entry in your pipeline to use this runner:
build:
  stage: build
  tags:
    - win-arm64
  script:
    - echo "Hello World"
Jenkins
Despite its age, Jenkins is still used in a lot of companies. It follows the same principle as the other platforms: register the agent, and then refer to it in your job/pipeline.
Follow these instructions to register a new agent for Jenkins. You'll need to install OpenJDK on your machine (a native Arm64 build is available).
Your agent will be qualified with one or several labels. You can then refer to those labels in your job configuration (via the web interface) or in your pipeline configuration.
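For example, in a declarative pipeline, selecting the agent is just a label expression (a sketch, assuming the agent was registered with a win-arm64 label):

pipeline {
    agent { label 'win-arm64' }
    stages {
        stage('Build') {
            steps {
                bat 'echo Hello World'
            }
        }
    }
}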
Linaro’s WoA CI
The goal of our team is to help open source projects natively support Windows on Arm. After some work on Python, we started digging into LLVM, Dart/Flutter, MySQL, Node.js, Perl and many others.
The ultimate aim is to upstream all our work, but it takes time to get patches accepted, and sometimes the lack of hosted CI runners can be a blocking issue.
To actively keep track of the projects we work with, we needed our own CI system.
Choose a CI provider
First, we had to choose a system to host our Git repositories and run CI from them. We selected GitLab and GitHub because they are the most widely used.
GitLab has a very nice feature: you can create subgroups of projects, which helps a lot with organising your repositories. On GitHub, everything sits under a single organisation. So we joined the official Linaro GitLab workspace and started our own subgroup.
Runners
Initially, we wanted to use Azure VMs for this work. Azure DevOps can provision agents from Azure virtual machine scale sets, but the deployment time is noticeable, and the provisioned machines are not torn down after every pipeline, so they offer scalability but no real isolation between jobs. Other providers offer provisioning based on containerization, which is not yet available for Windows on Arm.
Thus, the only possibility was to run the VMs full time. Considering the cost, we chose to use our own machines in the Linaro lab instead. The lab is hosted in Cambridge and is mainly used to validate Linux kernels and bootloaders on a wide range of boards. Now it helps our team support Windows on Arm development too!
First, we used several Surface Pro X devices, then Windows Dev Kit 2023 units, which reduced our CI times by 25 to 30%. Considering the $600 price for 8 cores and 32 GB of RAM, it's really the best price/performance ratio you can get today.
Reproducibility
To make our CI reproducible, as Docker is not yet available for Windows on Arm, we developed our own solution to install all the programs we need. The problem with existing package managers on Windows (Chocolatey, vcpkg, NuGet, winget, …) is that they are incomplete: unlike Linux distributions, the whole system is not built with them. MSYS2 is a great attempt at this, but it only covers open source software and does not yet support WoA (due to the lack of GCC for this platform).
Our solution, wenv, tries to fill this gap. For simplicity's sake, it handles a fixed set of dependencies (the ones we use), and it is not designed to replace a general package management system. To ease deployment and upgrades, a single PowerShell command that can be copy-pasted (much like the Chocolatey install) downloads or upgrades wenv.
To ensure this script is robust, it is implemented in Bash with the strict "set -euo pipefail" options. Yes, we install Windows programs using Bash; isn't that funny (and profitable)? PowerShell and batch should consider adding an equivalent.
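The skeleton is as simple as this (the download step is a hypothetical example, not an excerpt from wenv):

#!/usr/bin/env bash
# Abort on any failing command, any unset variable, and any failure inside a pipe.
set -euo pipefail

# Hypothetical installation step: a failed download or checksum mismatch stops
# the whole installation instead of leaving a half-installed environment behind.
curl -fsSL -o cmake.zip "$CMAKE_URL"
echo "$CMAKE_SHA256  cmake.zip" | sha256sum -c -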
After running wenv, a script named activate.bat is generated; it sets PATH so that all dependencies are available, much like a Python venv.
To run gitlab-runner.exe under wenv, we use a simple batch file:
cmd /c "C:\wenv\arm64\activate.bat" gitlab-runner.exe run-single ^
--output-limit 100000 ^
--url https://gitlab.com/ ^
--token SECRET_TOKEN ^
--name RUNNER_NAME ^
--executor shell ^
--shell powershell ^
--builds-dir C:/ci
Finally, to launch it at startup, we register it as a service using nssm (tutorial). Thus, all our jobs are executed directly in this reproducible environment.
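With nssm, that boils down to two commands (the service name and the path where the batch file above is saved are examples):

nssm install gitlab-runner "C:\Windows\System32\cmd.exe" "/c C:\wenv\run-gitlab-runner.bat"
nssm start gitlab-runner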
Ease of use
For each project we work on, we create a dedicated recipe to build it. Those recipes are listed here. All the hard work is done in a script, written in any language (batch, PowerShell, Python, Bash), and this script is developed and tested inside the reproducible wenv environment.
To factorise the common code and CI configuration, we wrote packagetools. Thus, a new recipe just has to implement a specific Python interface, and everything else is handled by default. It can even upload the resulting binary packages to our Azure storage.
Thus, it’s easy for anyone to add a new package, or clone and reproduce an existing one.
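Purely as an illustration (this is not the real packagetools interface, just the general shape of a recipe), a recipe boils down to a small Python class whose steps are driven by the common tooling:

# Hypothetical recipe shape - class and method names are illustrative only.
class ZlibRecipe:
    name = "zlib"

    def fetch(self):
        """Download or clone the upstream sources."""

    def build(self):
        """Configure and build natively for Arm64, inside the wenv environment."""

    def test(self):
        """Run the upstream test suite."""

    def package(self):
        """Produce the binary package that can be uploaded to Azure storage."""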
Finally, we set up a nightly CI job that we use to track external changes to the projects we follow, allowing us to catch regressions as early as possible. This job is defined in its own repository, so it's easy to see what we track, instead of diluting this configuration across different places.
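On GitLab, for instance, such a job is typically restricted to scheduled pipelines with a rules entry (a sketch; the job name and script are made up):

nightly-llvm:
  tags:
    - win-arm64
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - python build.py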
In addition to helping us develop our patches, this system has caught several regressions in projects we had already worked on, making it a precious ally in supporting Windows on Arm.
Conclusion
We hope this article helped you understand how to enable CI/CD for the new Windows on Arm platform. Along the way, we also shared some broader ideas about how to design a CI system in general. When in doubt, follow the KISS principle, and always keep reproducibility as a primary objective.
In the future, we expect existing CI providers to start actively supporting Windows on Arm runners in their own infrastructure, thus allowing anyone to start building and delivering for this exciting new platform.
For now, at Linaro, we’ll continue our journey to enable more open-source projects for this new platform, and launch new projects as well. So, stay tuned!