Understanding the Flywheel CLI
This guide explains what the Flywheel CLI is, how it works, and when to use it. Whether you're new to Flywheel or deciding between the CLI and web interface, this guide will help you understand the CLI's role in your workflow.
What you'll learn:
- How the CLI works and communicates with Flywheel
- When to use the CLI vs. the web UI
- How to choose the right data import approach
- CLI architecture and components
What is the Flywheel CLI?
The Flywheel Command-Line Interface (CLI) is a standalone program that runs on your computer and communicates with your Flywheel site through an API (Application Programming Interface). Think of it as a direct connection between your computer's command line and your Flywheel data.
How It Works
Key Components:
- CLI Program (fw): The executable file you download and run on your computer
- API Key: Your authentication credential that identifies you to Flywheel
- Flywheel API: The server-side interface that receives commands and returns data
- Commands: Actions you tell the CLI to perform (ingest, download, sync, etc.)
Authentication Flow
When you run fw login <api_key>, the CLI:
- Validates your API key with the Flywheel server
- Stores the key locally on your computer (in ~/.config/flywheel/user.json)
- Includes this key with every subsequent command for authentication
Security Note: Your API key provides full access to your Flywheel account. Keep it secret and never share it.
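Concretely, the flow looks like this (the chmod step is a general precaution for any credential file, not a Flywheel-specific requirement):

```shell
# One-time authentication: validates the key with your Flywheel site
# and caches it locally for all later commands.
fw login <api_key>

# The cached key is sent automatically with every subsequent command:
ls ~/.config/flywheel/user.json

# The key grants full account access, so keep the file private:
chmod 600 ~/.config/flywheel/user.json
```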
CLI vs. Web UI: When to Use Each
Both the CLI and web interface access the same Flywheel data, but each excels at different tasks.
Use the CLI When You Need To
Import Large Datasets
- Uploading hundreds or thousands of files
- Importing entire study datasets (DICOM, BIDS)
- Preserving complex folder structures
- Automating uploads with scripts
Example: Importing 500 subjects with 2 sessions each (1,000+ individual uploads) is impractical through the web UI but straightforward with fw ingest dicom.
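As a sketch, such an import can be a single command (the group and project labels here are placeholders; the flags are described later in this guide):

```shell
# Import an entire study's DICOM data: <source folder> <group> <project>
fw ingest dicom /data/scans psychology Study1

# The same import with parallel uploads and duplicate detection,
# which makes interrupted runs safe to restart:
fw ingest dicom --jobs 8 --detect-duplicates /data/scans psychology Study1
```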
Download Bulk Data
- Downloading entire projects or multiple sessions
- Syncing data to external storage (S3, Google Cloud)
- Preserving the Flywheel folder structure locally
- Performing differential updates (downloading only changed files)
Example: Using fw sync to mirror a project to your HPC cluster for analysis.
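A sketch of that mirroring step (the project path syntax and destination are illustrative and may differ between CLI versions):

```shell
# First run: download the whole project, preserving Flywheel's hierarchy
fw sync fw://psychology/Study1 /hpc/data/Study1

# Subsequent runs transfer only files that changed (differential update):
fw sync --jobs 8 fw://psychology/Study1 /hpc/data/Study1
```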
Automate Workflows
- Scheduled or repeated operations
- Integration with other scripts or pipelines
- Batch processing across multiple projects
- Part of continuous integration workflows
Example: A nightly script that uploads new imaging data from your scanner workstation.
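Such a script can be only a few lines (the paths, labels, and log file are placeholders; a cron schedule is shown in the comment):

```shell
#!/bin/sh
# nightly_upload.sh -- push new scanner data to Flywheel.
# Schedule with cron, e.g.:  0 2 * * * /opt/scripts/nightly_upload.sh

# --quiet limits output to errors (suits unattended runs);
# --detect-duplicates skips files uploaded on earlier nights.
fw ingest dicom --quiet --detect-duplicates \
    /scanner/outbox psychology Study1 \
    >> /var/log/flywheel_upload.log 2>&1
```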
Work in Non-Interactive Environments
- SSH sessions to remote servers
- Docker containers
- HPC job scripts
- Environments without a graphical interface
Use the Web UI When You Need To
Browse and Explore Data
- Viewing metadata and file details
- Searching across projects
- Exploring data hierarchy visually
- Understanding data organization
Example: Looking through sessions to find specific acquisition types or checking metadata quality.
Manage Projects and Permissions
- Creating and configuring projects
- Managing user roles and permissions
- Setting up gear rules
- Configuring de-identification profiles
Example: Adding collaborators to a project and setting their permission levels.
Run Analysis Gears
- Selecting and running analysis gears
- Monitoring gear execution progress
- Reviewing gear outputs and logs
- Comparing results across sessions
Example: Running fMRIPrep on specific sessions and reviewing quality control outputs.
View and Annotate Data
- Viewing DICOM images
- Reviewing BIDS validation results
- Adding notes and tags to sessions
- Viewing analysis dashboards
Example: Reviewing T1 image quality and tagging problematic scans for exclusion.
Handle One-Off Tasks
- Uploading a single file
- Downloading one session
- Quick permission checks
- Occasional data exports
Example: Downloading analysis results from a single gear run.
Combined Workflow Example
A typical research workflow often uses both:
- Setup (Web UI): Create project, configure permissions, set up gear rules, create de-ID profile
- Data Import (CLI): Use fw ingest dicom --de-identify to upload study data
- Quality Check (Web UI): Review uploaded data, check metadata, verify sessions imported correctly
- Analysis (Web UI): Run gears on imported data, monitor processing
- Export (CLI): Use fw export bids to download processed results for statistical analysis
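The two CLI steps of that workflow, sketched (labels and the output path are placeholders; see each command's --help for exact arguments):

```shell
# Step 2 -- import, applying the de-ID profile configured in the web UI:
fw ingest dicom --de-identify /data/scans psychology Study1

# Step 5 -- export processed results in BIDS layout for local analysis:
fw export bids ./Study1-bids
```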
Choosing Your Data Import Approach
The CLI provides multiple import commands. Here's how to choose the right one.
Decision Flow
Import Strategy Considerations
For PHI/Sensitive Data:
- Configure de-identification profiles BEFORE importing
- Test de-ID rules with fw deid test on sample data
- Use the --de-identify flag during import to apply rules
- This removes PHI before data leaves your network
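Sketched as commands (the profile name and paths are placeholders):

```shell
# 1. Dry-run the de-identification rules against sample data first:
fw deid test /data/sample-scans

# 2. Import with the rules applied, so PHI is stripped before any
#    file leaves your network:
fw ingest dicom --de-identify --deid-profile remove-phi \
    /data/scans psychology Study1
```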
For Large Datasets (>100 GB):
- Use the --jobs flag to increase concurrent uploads
- Consider importing in batches (by subject, session, or date range)
- Monitor network bandwidth and adjust concurrency
- Use --detect-duplicates to prevent re-uploading existing data
For Automated Processing:
- Set up gear rules BEFORE importing data
- Rules trigger automatically as data arrives
- Test rules on a small dataset first
- This enables immediate processing without manual intervention
For Multi-Site Studies:
- Use consistent subject/session labeling across sites
- Consider using the --subject and --session flags to override metadata
- Create templates for non-DICOM data to ensure consistent structure
- Document your import process for reproducibility
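For example, one site might normalize its labels at import time (the flag usage is a sketch built from the flags named above; SUBJ is a placeholder environment variable):

```shell
# Override metadata-derived labels so every site follows one convention:
fw ingest dicom --subject "site01-${SUBJ}" --session baseline \
    /data/scans psychology MultiSiteStudy
```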
CLI Architecture
Understanding the CLI's internal structure helps troubleshoot issues and optimize usage.
Command Structure
All CLI commands follow a common pattern: the fw executable, a command (sometimes with a subcommand), then options and positional arguments.
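A sketch of the pattern, with examples drawn from elsewhere in this guide (labels and paths are placeholders):

```shell
# General shape:
#   fw <command> [subcommand] [options] <arguments>

# command + argument:
fw login <api_key>

# command + subcommand + option + arguments:
fw ingest dicom --jobs 8 /data/scans psychology Study1

# command + arguments:
fw sync fw://psychology/Study1 ./Study1
```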
Data Flow During Import
Configuration Files
The CLI supports configuration files to simplify complex commands:
Location: ~/.config/flywheel/config.yaml
Purpose: Store frequently-used options (API key, timezone, de-ID profiles, etc.)
Benefit: Run fw ingest dicom /data/scans psychology Study1 instead of fw ingest dicom --de-identify --deid-profile remove-phi --timezone America/New_York --jobs 8 /data/scans psychology Study1
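A sketch of such a file (key names are assumed to mirror the long-form flags shown above and may differ in your CLI version):

```yaml
# ~/.config/flywheel/config.yaml
de-identify: true
deid-profile: remove-phi
timezone: America/New_York
jobs: 8
```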
Logging and Debugging
The CLI provides several levels of logging:
- Default: Shows progress and important messages
- --verbose: Shows detailed operation information
- --debug: Shows API calls and internal operations
- --quiet: Suppresses all output except errors
Log Locations:
- Linux/Mac: ~/.cache/flywheel/log/cli.log
- Windows: %LOCALAPPDATA%\flywheel\log\cli.log
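For example, when diagnosing a failed import (paths and labels are placeholders; the log path is the Linux/Mac default listed above):

```shell
# Re-run the failing command with API-level detail:
fw ingest dicom --debug /data/scans psychology Study1

# Then inspect the most recent CLI log entries:
tail -n 100 ~/.cache/flywheel/log/cli.log
```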
Performance Optimization
Upload Speed
Factors affecting upload speed:
- Network bandwidth: Your internet upload speed is usually the bottleneck
- File size: Larger files are more efficient (less overhead per file)
- Concurrency: More parallel uploads (the --jobs flag) can maximize bandwidth
- Compression: Reduces upload size but adds CPU overhead
Optimization tips:
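The factors above suggest a few concrete levers (the values and the per-subject batching pattern are illustrative starting points, not Flywheel recommendations):

```shell
# Tune concurrency until your upload bandwidth is saturated:
fw ingest dicom --jobs 8 /data/scans psychology Study1

# For very large studies, import in batches; duplicate detection
# makes re-runs and resumed batches safe:
for subj in /data/scans/sub-*; do
    fw ingest dicom --detect-duplicates "$subj" psychology Study1
done
```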
Download Speed
For fw download:
- Downloads files individually (best for small numbers of files)
- Use the --zip flag for multiple small files
For fw sync:
- Optimized for large-scale downloads
- Only downloads changed files on subsequent runs
- Supports parallel downloads with the --jobs flag
For fw export bids:
- Specifically optimized for BIDS structure
- Converts metadata to BIDS sidecar JSON during export
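A sketch (the output directory is a placeholder; see fw export bids --help for project-selection options):

```shell
# Export processed data into a BIDS directory tree, converting
# Flywheel metadata to BIDS sidecar JSON along the way:
fw export bids ./Study1-bids
```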
Next Steps
Now that you understand how the CLI works:
Start Using the CLI
- New to the CLI? Try our Import Your First Dataset Tutorial
- Ready to automate? See Configuration Files
- Need examples? Check Usage and Examples