Given the prevalence of Matlab in the scientific community, it is rather common for Flywheel users to want to develop a Flywheel Gear that can execute Matlab code. As Matlab users will be aware, licensing considerations are seemingly unavoidable. However, many users don't realize that you can easily (well, relatively) compile your Matlab code and share it with the community, who can then execute the code using the freely available Matlab Compiler Runtime. It's this method that we take advantage of to build and generate Matlab Gears. As a developer, you can compile your Matlab code and include the resulting binary within a Gear that is built from a base MCR image (more on that later).
General Gear-building principles still apply to Matlab Gears; however, there are a few additional requirements and considerations to keep in mind, some of which are specific to Matlab Gears, and these are outlined below.
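As a hypothetical illustration of the compilation step, an entry-point function can be compiled into a standalone binary from the Matlab prompt with MATLAB Compiler's mcc; the file name, output name, and output directory below are placeholders:

```matlab
% Hedged sketch: compile myGear.m (a placeholder entry-point function) into
% a standalone executable named "myGear", written to ./bin.
mcc -m myGear.m -o myGear -d ./bin
```

The resulting binary is what gets copied into the Gear's Docker image and executed against the matching MCR.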
General Matlab Gear Requirements
Of your Matlab code
- Must have no GUI interaction. Because FW Gears run in a "headless" environment, code that requires a GUI, or any kind of user interaction mid-stream, is not supported.
- Must run on Linux: As the FW Gear execution environment is container-based, we require that your execution environment similarly be Linux-based.
Of your development and execution environment
- Linux - you must be able to run and compile the code on Linux; it is not currently possible to cross-compile across platforms. That means Matlab code destined for a Gear cannot be compiled on macOS or Windows.
- Docker - Install instructions for Ubuntu are here. You will need Docker to build and test your Gear.
Of your Docker image
- You need to have (or build) docker images that contain the MCR (Matlab Compiler Runtime). Flywheel has some images that can be used as a base image for your project. MCR images can be browsed here. To reduce your development effort, it’s recommended to have a look at those images and use one that is already available.
- Important notes on the Matlab MCR
- The version of Matlab used to compile the code determines the version of the MCR required to execute it. The two versions must match, or the code will not run.
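When debugging a suspected version mismatch, a deployed application can report the MCR it is actually running under. A minimal sketch, using the standard MATLAB Compiler functions isdeployed and mcrversion:

```matlab
% Sketch: print the MCR version, but only when running as a compiled
% application (isdeployed is false inside an interactive Matlab session).
if isdeployed
    [major, minor] = mcrversion;
    fprintf('Running under MCR v%d.%d\n', major, minor);
end
```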
Available MCR images via Flywheel Docker Hub
Each of the Matlab MCR images that has been pre-built and is available to pull from Docker Hub is listed in the table below.
|Matlab MCR Version|
|---|
Example Matlab Gears
Oftentimes the best path forward is to look back (wait, what) at examples. Below are a few examples of compiled Matlab code being executed within Flywheel Gears:
- dtiinit-diffusion-maps: Generate diffusion maps (in NIfTI format), including Fractional Anisotropy (FA), Axial Diffusivity (AD), Mean Diffusivity (MD), and Radial Diffusivity (RD). The input to this Gear is a dtiInit archive, generated from either dtiInit, or as part of the AFQ processing pipeline.
- FLoc : A Flywheel Gear that can analyze data generated with the Functional localizer experiment used to define category-selective cortical regions (published in Stigliani et al., 2015).
- Gannet: Build context for a Gear that can run Gannet. Gannet is a software package designed for the analysis of edited magnetic resonance spectroscopy (MRS) data.
- DWI Split Shells: Extract individual diffusion shells from multi-shell DWI data. Output includes a NIfTI, BVEC, and BVAL file for each diffusion shell found in the data.
- DTI Error: Calculate RMSE between the measured signal and the ADC (or dSIG) based on tensor model fit provided by dtiInit.
- Apply Canonical X-Form: Reorient NIfTI data and metadata fields into RAS space by estimating and applying a canonical transform.
Contributed by Noah Mercer
- Edit <SPM_HOME>/config/spm_make_standalone.m and comment out the line in the Compilation section that begins with mcc. This will skip the final compilation step that actually builds SPM into its own SAE, which isn't necessary.
- Run spm_make_standalone at the Matlab prompt. This will configure SPM to allow it to run as part of your SAE.
- Beware of <SPM_HOME>/external/fieldtrip/compat, which has a series of directories named matlabltXXXXX. The 'lt' stands for "less than" and the XXXXX is a Matlab release, indicating that that code should only be included if you're running a version of Matlab older than that release. Including it in your SAE with later versions of Matlab can conflict with the Flywheel SDK. If you get a stack trace like this, it might be because you've inadvertently included one of these compatibility directories:
Error using strfind
PATTERN must be a string scalar or character vector.
Error in contains (line 36)
Error in flywheel.Finder/makeArgs_ (line 90)
Error in flywheel.Finder/find_ (line 69)
Error in flywheel.Finder/findFirst (line 29)
Error in flywheel.Finder/subsref (line 19)
Error in flywheel.Client/subsref (line 269)
Error in run_gear (line 51)
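One way to avoid pulling these directories in, sketched below under the assumption that a variable spmHome points at your SPM installation, is to remove the matlabltXXXXX directories from the Matlab path before compiling, so they are not packaged into the SAE:

```matlab
% Hedged sketch: strip fieldtrip's version-compatibility directories from
% the Matlab path before running mcc. spmHome is an assumed variable.
compatRoot = fullfile(spmHome, 'external', 'fieldtrip', 'compat');
listing = dir(fullfile(compatRoot, 'matlablt*'));
for k = 1:numel(listing)
    if listing(k).isdir
        % rmpath warns (but does not error) if the directory is not on the path
        rmpath(fullfile(compatRoot, listing(k).name));
    end
end
```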
Contributed by Geoff Aguirre
Compiled MATLAB code running in a GCP VM does not make use of multiple cores when invoking the parallel pool for analysis.
The GCP VM is backed by a single physical CPU; hyper-threading is used to create two (virtual) cores, each with two (even more virtual) "sibling" threads. Matlab does not recognize the virtual cores as available for processing.
Using the MATLAB GUI for parpool management, create a parpool profile that allows the desired number of "workers" (e.g., 2 or 4 workers for the default GCP VM used in Flywheel).
Export this profile to a file named (e.g.) flywheel.mlsettings.
Within the Matlab code, create the parpool with a function that:
1) Imports the “flywheel.mlsettings” profile and makes it the default
2) Examines /proc/cpuinfo to determine the number of “cpu cores”
3) Calls the MATLAB function “maxNumCompThreads” with the found number of cores
4) Creates the parpool with the specified profile
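A minimal sketch of such a function, assuming the profile is stored at /usr/flywheel.mlsettings inside the Gear (the function name is illustrative, not the exact production code):

```matlab
function pool = createGearParpool()
% Sketch of the four steps described above.
% 1) Import the "flywheel.mlsettings" profile and make it the default.
profile = parallel.importProfile('/usr/flywheel.mlsettings');
parallel.defaultClusterProfile(profile);
% 2) Examine /proc/cpuinfo to determine the number of "cpu cores".
[~, cpuinfo] = system('grep -m 1 "cpu cores" /proc/cpuinfo');
nCores = str2double(regexp(cpuinfo, '\d+', 'match', 'once'));
% 3) Tell MATLAB how many computational threads it may use.
maxNumCompThreads(nCores);
% 4) Create the parpool with the imported profile.
pool = parpool(profile, nCores);
end
```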
In the run script that invokes the compiled Matlab code, include the "-mcruserdata" key-value pair. E.g.:
myMatlabApp -mcruserdata ParallelProfile:/usr/flywheel.mlsettings
where “myMatlabApp” is the compiled matlab code and “/usr/flywheel.mlsettings” is the path to the mlsettings profile as it is stored within the Gear (Docker image).
This procedure has allowed us to make use of the hyper-threaded cores found on GCP VMs. In testing, we have found that attempting to run a job using the set of 4 "siblings" within the default VM does not produce any further increase in speed. This may depend somewhat on the nature of our calculation, which is almost entirely CPU-bound with little time spent on data I/O.