The fw sync Command
fw sync Replacement
Bulk Export is a replacement for the Flywheel fw sync command in the legacy CLI. The fw sync command will be deprecated in an upcoming release; until then, fw sync remains the command for downloading large volumes of data.
The Flywheel CLI Sync capability allows you to sync Flywheel data, including the folder structure, from Flywheel to your computer and Amazon S3 or Google Cloud buckets.
This is the recommended method for downloading larger datasets.
Note
The fw sync command only supports one-directional syncing, which means you cannot sync from your computer or cloud storage bucket to Flywheel.
Similar to the rsync utility, the Flywheel folder structure and data will be recreated on the destination file system on the first sync.
On subsequent syncs, only the differences between the source and the destination are copied.
Prerequisites
Follow these instructions to download and install the Flywheel CLI.
Before You Begin
Ensure you have:
CLI Setup:
- Flywheel CLI installed on your computer
- Valid API key and authenticated CLI (verify with fw status)
- Stable internet connection for data download
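Before authenticating, it can help to confirm the CLI binary is actually on your PATH; a minimal sketch (fw status then verifies authentication separately):

```shell
# Check whether the Flywheel CLI is installed and reachable on PATH.
# Prints a hint if the binary is missing.
if command -v fw > /dev/null 2>&1; then
  echo "fw found at: $(command -v fw)"
else
  echo "fw not found; install the Flywheel CLI first"
fi
```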
Permissions:
You need the following project-level permissions:
- Download File - Required to download files from the project
- View Metadata - Required to access project structure and metadata
To check your permissions, navigate to the project in the web UI and check your role. Learn more about user roles and permissions.
Destination Requirements:
For syncing to your local computer:
- Sufficient disk space for the project data
- Write permissions for the destination directory
- Consider using --dry-run first to preview download size
For syncing to cloud storage (S3/Google Cloud):
- Cloud credentials configured before running sync
- AWS: Configure AWS credentials
- Google Cloud: Configure GCloud authentication
- Read/write access to the destination bucket
- Verify bucket path is correct and accessible
Important Notes:
- fw sync is one-directional (Flywheel → destination only)
- Subsequent syncs only transfer changed files (similar to rsync)
- Use the --dry-run flag to preview what will be synced without transferring files
- Use --list-only to see the folder tree without syncing
Instructions
1. Open Terminal or Windows Command Prompt.
2. Determine the source path for your Flywheel project. It follows this structure: fw://[GroupID]/[Project Label]. You can find this path in the Flywheel UI with the following steps:
   - Sign in to Flywheel.
   - Go to your project.
   - At the top of the project page, copy the path.
3. Determine the destination path for your Flywheel project. The destination path can be a location on your local computer or an Amazon S3/Google Cloud bucket.
To sync to your computer
1. Determine the location where you want to sync the Flywheel project on your computer.
2. Enter the following command:
   fw sync [optional flags] [source-path] [destination-path]
   For example:
   fw sync --full-project fw://psychology/"Longitudinal Anxiety Study" /local/data/project1
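Because the shell splits arguments on spaces, a project label like the one above must be quoted so it reaches fw sync as a single argument; a minimal sketch:

```shell
# Quote project labels that contain spaces; unquoted, the shell would split
# this into three separate arguments.
SRC='fw://psychology/Longitudinal Anxiety Study'
echo "$SRC"
# fw sync --full-project "$SRC" /local/data/project1   # the actual sync (sketch, not run here)
```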
To sync to an Amazon S3 or Google Cloud bucket
1. Configure the credentials for your bucket. The Flywheel CLI uses these credentials to access data in the storage bucket, so you must configure them before running the sync command. The Flywheel CLI does not support passing credential parameters to it. Make sure that the authenticated user has read/write access to data in the bucket.
   - AWS: See Amazon's documentation on how to use the configure command to set up your credentials, or learn more about creating a shared credentials file or using environment variables.
   - Google Cloud: See Google's documentation on how to use the gcloud auth login command to set up your credentials, or learn more about the other authentication options.
2. Start with the following command:
   fw sync <optional flags> <SRC> <DEST>
3. Replace the placeholders with the relevant info for your data and environment, and add any optional flags. Use the following format for the destination:
   - S3: s3://bucket-name/key-name
   - Google Cloud: gs://BUCKET_NAME1/
   For example:
   fw sync fw://psychology/"Anxiety Study" s3://MyStudy/DataForUpload
4. Copy and paste your command into Terminal or Windows Command Prompt, and press Enter. When you use the --full-project optional flag, the fw sync command recreates the full project hierarchy (including analyses and metadata) at the destination.
View Files Before Syncing
To perform a test run to preview how the project will be synced, enter the following command in Terminal or Windows Command Prompt:
fw sync --dry-run [source-path] [destination-path]
Review the audit log for a preview of the sync.
Only Sync Certain Filetypes
You may want to sync only certain filetypes to your computer. For example, you know that you want to sync DICOM files because you plan to run analyses locally. You can configure the sync to only include those filetypes.
In Terminal or Windows Command Prompt, enter the following command:
fw sync --include dicom [source-path] [destination-path]
Use Tags to Export a Subset of Data
Tag filtering allows you to include or exclude data for download based on a subject, session, acquisition, analysis, or file tag.
fw sync --include-container-tags '{"container": ["some-tag"]}' [source-path] [destination-path]
Here container is the location of the tag, and the options are: subject, session, acquisition, analyses, and file. Flywheel will sync that container and all of its children.
For example, if you want to download data only from subjects with the cohort1 tag, you would format it as: --include-container-tags '{"subject": ["cohort1"]}'.
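Because the filter is a JSON string, a stray quote or bracket will make the flag fail; one way to sanity-check it before running the sync, assuming python3 is available:

```shell
# Hypothetical tag filter; single quotes keep the shell from touching the braces.
FILTER='{"subject": ["cohort1"]}'
# Validate the JSON before handing it to fw sync (errors out on malformed input):
echo "$FILTER" | python3 -m json.tool
# fw sync --include-container-tags "$FILTER" fw://group/project ./dest   # (sketch, not run here)
```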
Note
To filter by container tags, you must have tagged the container you wish to download using our tag management system. It is not currently possible to filter based on other metadata such as container labels.
When added to the command:
fw sync --include-container-tags '{"subject": ["cohort1"]}' fw://radiology/Study1 ~/Documents/ExportedData
Note
To filter by the analyses tag, you must include the --analyses or --full-project flag.
Filtering on More Than One Tag
It is possible to filter on more than one tag. When adding multiple tags, Flywheel uses AND logic to filter the data. This means that all tags specified must be present to download the data.
On the same container
--include-container-tag '{"session": ["cohort1", "complete"]}'
In this example, only sessions with BOTH the cohort1 and complete tag are downloaded.
On more than one type of container
--include-container-tag '{"subject": ["example", "cohort1"], "session":["review","complete"]}'
In this example, only sessions with both the review and complete tags that also belong to subjects tagged with both example and cohort1 are downloaded.
Verify Success
After the sync completes, verify your data was synced correctly:
1. Check the CLI Output
Review the sync summary in the terminal:
- Number of files synced
- Files created, updated, or deleted
- Any errors or warnings during sync
2. Verify Destination Structure
Navigate to your destination and check the folder structure. On a local filesystem, list the destination directory; for cloud storage, use the appropriate tools (AWS CLI, gsutil, etc.). If you synced with the --full-project flag, expect the full project hierarchy, including analyses and metadata.
3. Check File Counts
Compare file counts between Flywheel and destination:
- Number of subjects matches
- Number of sessions per subject matches
- File counts per acquisition match
- Verify filtered data matches expectations (if using --include, --exclude, or tag filters)
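The counts above can be gathered with standard tools on a local destination; a minimal sketch (DEST points at the current directory here, and whether top-level folders are subjects depends on your layout):

```shell
# Count files and top-level folders at a local sync destination.
# DEST is the current directory here; replace it with your real destination
# (e.g., /local/data/project1).
DEST=.
find "$DEST" -type f | wc -l                           # total files synced
find "$DEST" -mindepth 1 -maxdepth 1 -type d | wc -l   # top-level folders
```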
4. Review Audit Logs (if enabled)
If you used the --save-audit-logs flag, check the audit log for:
- Files with status "completed" or "skipped"
- Any errors in the error_message column
- Actions taken (created, updated, deleted)
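A quick scan of the saved audit log for failures might look like this; the filename and the position of the error_message column (assumed to be the 3rd) are assumptions, so check the header first:

```shell
# Scan a saved audit log (CSV) for rows with a non-empty error_message.
AUDIT=./sync-audit.csv        # hypothetical path you passed to --save-audit-logs
if [ -f "$AUDIT" ]; then
  head -n 1 "$AUDIT"                        # column names
  awk -F, 'NR > 1 && $3 != ""' "$AUDIT"     # data rows whose 3rd column (assumed error_message) is non-empty
fi
```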
5. Test Subsequent Sync
Run the same sync command again to verify that only changed files are transferred. If nothing changed, the output should show minimal or no files transferred.
Next Steps
After successfully syncing your data:
- Analyze locally: Use the synced data for local analysis or processing
- Automate syncs: Set up scheduled syncs using cron (Linux/Mac) or Task Scheduler (Windows)
- Upload results back: Use fw upload to add analysis results back to Flywheel
- Monitor changes: Run periodic syncs to keep local/cloud copies up to date with Flywheel
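A scheduled sync on Linux/Mac can be a single crontab line; everything here (binary path, project, log location) is illustrative:

```shell
# Hypothetical crontab entry (install with `crontab -e`): run a nightly sync
# at 02:00 and append output to a log file.
0 2 * * * /usr/local/bin/fw sync --full-project fw://psychology/"Anxiety Study" /local/data/project1 >> /var/log/fw-sync.log 2>&1
```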
Usage
Optional Arguments
Sync
| Optional Argument | Description |
|---|---|
| -i FILE_TYPE, --include FILE_TYPE | Download only files with the specified types.* |
| -e FILE_TYPE, --exclude FILE_TYPE | Ignore files with the specified types.* |
| --include-container-tags T | Sync only the containers with specified tags and everything under them (e.g., --include-container-tag '{"subject": ["some-tag"]}') |
| --exclude-container-tags T | Skip the containers with specified tags and everything under them (e.g., --exclude-container-tag '{"project": ["some-tag"]}') |
| --include-mlset T | Sync only the subjects with the specified ML Set and everything under them (e.g., --include-mlset Training) |
| --exclude-mlset T | Skip the subjects with the specified ML Set and everything under them (e.g., --exclude-mlset Validation) |
| -a, --analyses | Include analyses |
| -m, --metadata | Include metadata |
| -x, --full-project | Include analyses and metadata |
| -z, --no-unpack | Keep zipped DICOMs intact (default: extract) |
| -l, --list-only | Show the folder tree on the source instead of syncing |
| -v, --verbose | Show individual files with --list-only |
| -n, --dry-run | Show what sync would do without transferring files |
| -j N, --jobs N | The number of concurrent jobs to run (default: 4) |
| --tmp-path TMP_PATH | Set a custom temp dir where zips will be extracted (default: system temp dir) |
| --delete | Delete extra files from the destination |
| --export-templates-file EXPORT_TEMPLATES_FILE | Set the export templates YAML file |
| --save-audit-logs SAVE_AUDIT_LOGS | Save the audit log to the specified path on the current machine |
* Learn more about file types in Flywheel
General
| Optional Argument | Description |
|---|---|
| -h, --help | Show help message and exit. |
| -C PATH, --config-file | Specify configuration options via config file.* |
| --no-config | Do NOT load the default configuration file. |
| -y, --yes | Assume the answer is yes to all prompts. |
| --ca-certs CA_CERTS | Path to a local Certificate Authority certificate bundle file. This option may be required when using a private Certificate Authority. |
| --timezone TIMEZONE | Set the effective local timezone to use when uploading data. |
| -q, --quiet | Squelch log messages to the console. |
| -d, --debug | Turn on debug logging. |
| -v, --verbose | Get more detailed output. |
* Learn more about how to create this file.
Common Errors
Common CLI Errors
For authentication, network issues, and other errors common to all CLI commands, see the CLI Troubleshooting Guide.
"No project found at "
Cause: The source path does not point to a valid Flywheel project or the project doesn't exist.
Solution:
- Verify the source path follows the format fw://group/project
- Check that the group ID and project label are correct (case-sensitive)
- Use fw ls to browse available groups and projects
- Ensure you're logged into the correct Flywheel site with fw status
"Destination perm-check failed" (Permission Error)
Cause: Cannot write to the destination location (local filesystem, S3, or Google Cloud Storage).
Solution:
- Local filesystem: Verify you have write permissions for the destination directory
- S3: Check that AWS credentials are configured correctly (aws configure)
- S3: Ensure the IAM user/role has write permissions to the bucket
- Google Cloud: Verify credentials with gcloud auth login
- Google Cloud: Ensure the service account has write permissions to the bucket
"Group and project level container tag filters are not allowed"
Cause: Attempted to use --include-container-tags or --exclude-container-tags with group or project level filters.
Solution:
- Container tag filters only work at subject, session, acquisition, analyses, and file levels
- Remove group/project from your tag filter specification
- Example of a valid filter: --include-container-tags '{"subject": ["cohort1"]}'
"Analysis container filtering only works with --analyses flag"
Cause: Used analysis-level container tags without including the --analyses flag.
Solution:
- Add the
--analysesflag to your command - Or use
--full-projectwhich includes analyses - Example:
fw sync --analyses --include-container-tags '{"analyses": ["reviewed"]}' ...
"Could not extract file" / "Possibly not a valid zipfile"
Cause: Downloaded zip file is corrupted or incomplete.
Solution:
- Check your network connection stability
- Try syncing again (command will retry 5 times automatically for network errors)
- If error persists, the file may be corrupted in Flywheel - contact support
- Check available disk space at destination
S3/GCS Credential Errors
Cause: Cloud storage credentials are not configured or are invalid.
Solution:
- AWS S3: Configure credentials with aws configure or set environment variables
- AWS S3: See AWS credential configuration
- Google Cloud: Authenticate with gcloud auth login
- Google Cloud: See Google Cloud authentication
- Verify the credentials have read/write access to the specified bucket
See Also
Related Commands:
- Download Files - Download individual files or containers
- Export BIDS Data - Export data in BIDS format
- Import Data - Guide for uploading data to Flywheel
Data Management:
- Understanding Metadata - Learn about metadata preservation during sync
- File Types in Flywheel - Understanding file type filtering
Troubleshooting:
- CLI Troubleshooting Guide - Common CLI issues and solutions