The Flywheel CLI sync capability allows you to sync Flywheel data, including the folder structure, from Flywheel to your computer or to an Amazon S3 or Google Cloud bucket. This is the recommended method for downloading larger datasets. Note: this command only supports one-directional syncing; you cannot sync from your computer or cloud storage bucket to Flywheel.
Similar to the rsync utility, the first sync recreates the Flywheel folder structure and data on the destination file system. On subsequent syncs, only the differences between the source and destination files are copied.
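To illustrate this rsync-style behavior, here is a minimal, hypothetical sketch of difference-based copying (a local-to-local illustration only, not the Flywheel implementation): a file is copied only when it is missing at the destination or its content differs.

```python
import hashlib
import shutil
from pathlib import Path

def needs_copy(src: Path, dst: Path) -> bool:
    """Copy only when the destination is missing or its content differs."""
    if not dst.exists():
        return True
    return (hashlib.sha256(src.read_bytes()).digest()
            != hashlib.sha256(dst.read_bytes()).digest())

def sync(src_dir: Path, dst_dir: Path) -> list[str]:
    """One-directional sync; returns the relative paths that were copied."""
    copied = []
    for src in sorted(src_dir.rglob("*")):
        if src.is_file():
            rel = src.relative_to(src_dir)
            dst = dst_dir / rel
            if needs_copy(src, dst):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                copied.append(str(rel))
    return copied
```

Running this twice against an unchanged source copies everything on the first pass and nothing on the second, which mirrors the behavior described above.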
1. Follow these instructions to download and install the Flywheel CLI.
2. Open Terminal or Windows Command Prompt.
3. Determine the source path for your Flywheel project. It follows this structure:
   fw://[GroupID]/[Project Label]
   You can find this path in the Flywheel UI:
   1. Sign in to Flywheel.
   2. Go to your project.
   3. At the top, copy the path.
4. Determine the destination path for your Flywheel project. The destination path can be a location on your local computer or an Amazon S3/Google Cloud bucket.
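The source-path structure above can be sketched in code. This hypothetical helper (not part of the Flywheel CLI) composes the fw:// path from a group ID and project label, quoting labels that contain spaces as the examples in this article do:

```python
def source_path(group_id: str, project_label: str) -> str:
    """Build a fw://[GroupID]/[Project Label] source path.
    Labels containing spaces are wrapped in quotes for the command line."""
    label = f'"{project_label}"' if " " in project_label else project_label
    return f"fw://{group_id}/{label}"
```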
To sync to your computer:
1. Determine the location where you want to sync the Flywheel project on your computer.
2. Enter the following command:
   fw sync [optional flags] [source-path] [destination-path]
   For example:
   fw sync --full-project fw://psychology/"Longitudinal Anxiety Study" /local/data/project1
To sync to an Amazon S3 or Google Cloud bucket:
1. Configure the credentials for your bucket. The Flywheel CLI uses these credentials to access data in the storage bucket, so you must configure them before running the sync command. The Flywheel CLI does not support passing credential parameters to it. Make sure that the authenticated user has read/write access to data in the bucket.
   - AWS: See Amazon's documentation on how to use the configure command to set up your credentials, or learn more about creating a shared credentials file or using environment variables to set up credentials.
   - Google Cloud: See Google's documentation on how to use the gcloud auth login command to set up your credentials, or learn more about the other authentication options.
2. Start with the following command:
   fw sync [optional flags] [source-path] [destination-path]
   Replace the placeholders with the relevant info for your data and environment, and add any optional flags. Use the following format for the bucket destination path:
   - S3: s3://bucket-name/key-name
   - Google Cloud: gs://BUCKET_NAME/
   For example:
   fw sync fw://psychology/"Anxiety Study" s3://MyStudy/DataForUpload
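The bucket URL formats above follow standard URL syntax, so they can be split into bucket name and key prefix with Python's standard library. This is an illustrative sketch, not something the Flywheel CLI requires you to do:

```python
from urllib.parse import urlparse

def bucket_and_key(url: str) -> tuple[str, str]:
    """Split an s3:// or gs:// destination into bucket name and key prefix."""
    parts = urlparse(url)
    if parts.scheme not in ("s3", "gs"):
        raise ValueError("expected an s3:// or gs:// URL")
    return parts.netloc, parts.path.lstrip("/")
```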
3. Copy and paste your command into Terminal or Windows Command Prompt, and hit Enter.
4. The Flywheel CLI displays the data it has found. Review the hierarchy and summary to make sure it matches what you expect.
5. Enter yes to begin syncing. The Flywheel CLI displays its progress.
6. Once complete, sign in to Flywheel and view your data.
When you use the --full-project optional flag, the fw sync command creates the following hierarchy:

    project_label
    |-- project_label.flywheel.json
    |-- ANALYSES
    |   |-- analyses_label
    |       |-- analyses_label.flywheel.json
    |       |-- INPUT
    |       |-- OUTPUT
    |-- FILES
    |   |-- filename.ext
    |   |-- filename.ext.flywheel.io
    |-- SUBJECTS
        |-- subject_label
            |-- subject_label.flywheel.json
            |-- ANALYSES
            |-- FILES
            |-- SESSIONS
                |-- session_label
                    |-- session_label.flywheel.json
                    |-- ANALYSES
                    |-- FILES
                    |-- ACQUISITIONS
                        |-- acquisition_label
                            |-- acquisition_label.flywheel.json
                            |-- FILES
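To make the hierarchy concrete, this hypothetical helper (for illustration only) builds the destination path where a file attached to an acquisition would land under the --full-project layout:

```python
from pathlib import PurePosixPath

def acquisition_file_path(project: str, subject: str, session: str,
                          acquisition: str, filename: str) -> str:
    """Destination path for an acquisition file in the --full-project layout."""
    return str(PurePosixPath(project) / "SUBJECTS" / subject
               / "SESSIONS" / session
               / "ACQUISITIONS" / acquisition / "FILES" / filename)
```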
To perform a test run that previews how the project will be synced, enter the following command from Terminal or Windows Command Prompt:
fw sync --dry-run [source-path] [destination-path]
Review the audit log for a preview of the sync.
You may want to sync only certain file types to your computer. For example, if you plan to run analyses locally, you might want to sync only DICOM files. Configure the sync to include only those file types by entering the following command in Terminal or Windows Command Prompt:
fw sync --include dicom [source-path] [destination-path]
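The include/exclude semantics can be sketched as a simple predicate (a hypothetical illustration of the -i/--include and -e/--exclude flags, not the CLI's actual implementation): when an include list is given it wins; otherwise anything not excluded passes.

```python
def keep_file(file_type: str, include=None, exclude=None) -> bool:
    """Sketch of include/exclude filtering by file type."""
    if include:
        return file_type in include
    if exclude:
        return file_type not in exclude
    return True
```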
You can also include or exclude data for download based on a subject, session, acquisition, analyses, or file tag:
fw sync --include-container-tags '{"container": ["some-tag"]}' [source-path] [destination-path]
Container is the location of the tag; the options are subject, session, acquisition, analyses, and file. Flywheel syncs that container and all of its children.
For example, to download data only from subjects with the cohort1 tag, you would format it as --include-container-tags '{"subject": ["cohort1"]}'. When added to the command:
fw sync --include-container-tags '{"subject": ["cohort1"]}' fw://radiology/Study1 ~/Documents/ExportedData
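The argument to --include-container-tags is a JSON object mapping a container type to a list of tags. This hypothetical validator (for illustration only) shows the expected shape:

```python
import json

VALID_CONTAINERS = {"subject", "session", "acquisition", "analyses", "file"}

def parse_container_tags(arg: str) -> dict:
    """Parse and sanity-check a container-tags filter argument (sketch)."""
    filt = json.loads(arg)
    for container, tags in filt.items():
        if container not in VALID_CONTAINERS:
            raise ValueError(f"unknown container: {container}")
        if not isinstance(tags, list):
            raise ValueError("tags must be a JSON list")
    return filt
```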
Note: To filter by analyses tag, you must include the --analyses or --full-project flag.
It is possible to filter on more than one tag. When you add multiple tags, Flywheel uses AND logic to filter the data: all of the specified tags must be present for the data to be downloaded.
- To include more than one tag on the same container:
  --include-container-tags '{"session": ["cohort1", "complete"]}'
  In this example, only sessions with BOTH the cohort1 and complete tags are downloaded.
- To include more than one type of container:
  --include-container-tags '{"subject": ["example", "cohort1"], "session": ["review", "complete"]}'
  In this example, only sessions with both the review and complete tags that also belong to subjects tagged with both example and cohort1 are downloaded.
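The AND logic described above can be expressed as a one-line predicate. This is an illustrative sketch of the filtering rule, not the CLI's internal code:

```python
def container_matches(container_tags: list[str], required_tags: list[str]) -> bool:
    """AND logic: every required tag must be present on the container."""
    return all(tag in container_tags for tag in required_tags)
```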
| Optional argument | Description |
|---|---|
| -h, --help | Show help message and exit |
| --config-file CONFIG_FILE, -C CONFIG_FILE | Specify configuration options via a config file |
| --no-config | Do NOT load the default configuration file |
| -y, --yes | Assume the answer is yes to all prompts |
| --ca-certs CA_CERTS | The file to use for SSL certificate validation |
| --timezone TIMEZONE | Set the effective local timezone for imports |
| --debug | Turn on debug logging |
| --quiet | Squelch log messages to the console |
| -i T, --include T | Sync only files with the specified types (e.g., -i dicom) |
| -e T, --exclude T | Skip files with the specified types (e.g., -e nifti -e qa) |
| --include-container-tags T | Sync only the containers with the specified tags and everything under them (e.g., --include-container-tags '{"subject": ["some-tag"]}') |
| --exclude-container-tags T | Skip the containers with the specified tags and everything under them (e.g., --exclude-container-tags '{"project": ["some-tag"]}') |
| --include-mlset T | Sync only the subjects with the specified ML Set and everything under them (e.g., --include-mlset Training) |
| --exclude-mlset T | Skip the subjects with the specified ML Set and everything under them (e.g., --exclude-mlset Validation) |
| -a, --analyses | Include analyses |
| -m, --metadata | Include metadata |
| -x, --full-project | Include analyses and metadata |
| -z, --no-unpack | Keep zipped DICOMs intact (default: extract) |
| -l, --list-only | Show the folder tree on the source instead of syncing |
| -v, --verbose | Show individual files with --list-only |
| -n, --dry-run | Show what sync would do without transferring files |
| -j N, --jobs N | The number of concurrent jobs to run (default: 4) |
| --tmp-path TMP_PATH | Set a custom temp dir where zips will be extracted (default: system temp dir) |
| --delete | Delete extra files from the destination |
| --export-templates-file EXPORT_TEMPLATES_FILE | Set the export templates YAML file |
| --save-audit-logs SAVE_AUDIT_LOGS | Save the audit log to the specified path on the current machine |