The last thing the curate-bids V2 gear does is produce five .csv files that report the state of the BIDS curation. This article walks through these spreadsheets in detail and explains how to troubleshoot any issues so you can move on to the next step of data processing.
Even though the BIDS Curation gear finished successfully, there is no guarantee that the project has been curated correctly. The gear cannot know whether the project has been properly curated because it cannot know why a particular scan was acquired or what purpose it serves in subsequent analyses. It is important to examine the curation report in detail to make sure that every subject has all of the expected acquisitions and that they end up with the proper BIDS paths and names. Additionally, some errors may only be found in post-curation processing steps because BIDS App algorithms that process BIDS-formatted data may have additional requirements that are not examined by the BIDS Curation gear or even the BIDS Validator. The information in the spreadsheets will help flush out issues now instead of later, when you are running a BIDS App.
To view the report, select a Session, and then click the Analyses tab. The OUTPUTS section lists the curate-bids gear results. For example:

To view the .csv files in Flywheel, open the options menu and select View.
The spreadsheets produced are:
<group>_<project>_niftis.csv: A mapping from the original information (acquisition name, file name, series number, etc.) to the final BIDS path/filename. A column indicates whether the path/filename is duplicated, which will result in an error from the BIDS Validator. BIDS Apps run the BIDS Validator and won't work if the BIDS paths and filenames are incorrect. Check this spreadsheet to confirm that all files have been properly recognized or intentionally ignored.
<group>_<project>_acquisitions.csv: lists warnings when a specific acquisition appears an unexpected number of times or when subjects do not have the expected number of the usual acquisitions. This is useful when there are multiple subjects because it shows the “usual count” of each acquisition across all subjects and flags the subjects that do not have that count.
<group>_<project>_acquisitions_details_1.csv (_2.csv): lists all of the unique acquisition labels along with the number of times each was seen, plus additional details that help determine which subjects have missing or extra acquisitions.
<group>_<project>_intendedfors.csv: lists each field map followed by the paths of the files that the field map will be used to correct. If IntendedFor regular expression pairs are provided, it lists the mapping produced by the project curation template as the “before” results and the mapping after the regexes have trimmed those results as the “after” results.
Next, we will go over the first four spreadsheets (we'll leave the fifth for a later tutorial).
Here is an example of a <group>_<project>_niftis.csv spreadsheet. In this example, BIDS curation test is the group and Levitas Tutorial is the project, so the file is named bids-curation-test_Levitas_Tutorial_niftis.csv:

Note that if the above image is difficult to see, you can right-click it and open it in a new tab.
This spreadsheet provides all the information necessary to understand how the Curated BIDS Path was determined for every file in the project. The Curated BIDS Path shows the BIDS folder (anat, dwi, fmap, or func) and the file name. Remember, the goal of BIDS Curation is to get the BIDS path right so when you run a gear or export data in BIDS format, the proper name is assigned to the NIfTI files, and they are placed in the proper folder.
You can also have data in the ignored folder. Data in the ignored folder is not included when you turn on BIDS View in Flywheel, run a BIDS App, or export data in BIDS format. You can add the "ignore" metadata flag to a particular file, acquisition, or entire session. The default project curation template sets this flag on an acquisition if the acquisition name ends in "_ignore-BIDS". The Ignored column contains an "S", "A", or "F" to indicate that a particular file was ignored at the session, acquisition, or file level.
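As an illustration (this is not the gear's actual code, and the example labels are made up), the default convention amounts to a simple suffix check on the acquisition label:

# Sketch of the default ignore convention: an acquisition whose label ends
# in "_ignore-BIDS" gets the BIDS "ignore" flag set. Example labels are made up.
def should_ignore(acquisition_label: str) -> bool:
    return acquisition_label.endswith("_ignore-BIDS")

print(should_ignore("func-bold_task-rest_ignore-BIDS"))  # True
print(should_ignore("func-bold_task-rest"))              # False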
After the ignored list, the next folder in the Curated BIDS Path is sourcedata. This BIDS folder holds DICOM scans from which the NIfTIs were created. Some BIDS App algorithms require source data, but many do not.
The last folder is listed in the spreadsheet as unrecognized. Data in the unrecognized folder appears in the nonBids section in BIDS View and has its BIDS metadata field set to NA. Unrecognized data means that no project curation template rule recognized those files.
Below is the same niftis.csv spreadsheet scrolled down to show the sourcedata and unrecognized values in the Curated BIDS Path column:
The other columns in this spreadsheet are included to help identify the files. A project curation template can use data in any of these columns to recognize a scan or initialize a BIDS field. The recommended ReproIn template is focused almost exclusively on the acquisition label, which is usually determined by the SeriesDescription DICOM tag. The ReproIn template uses regular expressions to match and extract strings from the acquisition label.
The Rule ID column indicates which rule in the template matched each file in the spreadsheet. If it is blank, no rule matched, and the file is added to the unrecognized folder. The rules are part of the project curation template, which is composed of two main sections: definitions and rules. The definitions specify what information is required for a particular BIDS entity (anatomical scan, functional scan, etc.) and how that information is arranged in the BIDS file name. The rules determine how a particular file is recognized by the template in the "where" clause, and also set the necessary BIDS fields in the "initialization" clause.
When a duplicate BIDS path/name is detected, the Unique? column shows "duplicate". Duplicates can occur for many reasons. Sometimes a scan is repeated because the subject moved, or had to leave the magnet for a while, so the same scan is restarted. Sometimes the scans are actually different, but there is nothing in the acquisition name to differentiate them. The BIDS standard provides various entities, such as "acq-", to work around this kind of duplicate name.
The BIDS Validator will detect duplicate names, and this will prevent a BIDS App (such as bids-fmriprep) from running. You can fix this error by renaming acquisitions, setting an ignore flag, or modifying the project curation template.
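For a large project it can be faster to scan the niftis.csv report programmatically. Below is a minimal sketch using pandas; the column headers ("Curated BIDS path", "Unique?", "Ignored") and the file name are assumptions based on the example above, so adjust them to match your copy of the spreadsheet.

# Scan the niftis.csv report for problems before running a BIDS App.
# Column names and the file name are assumptions; check your own spreadsheet.
import pandas as pd

df = pd.read_csv("bids-curation-test_Levitas_Tutorial_niftis.csv")

# Rows the gear flagged as duplicate BIDS path/names:
duplicates = df[df["Unique?"] == "duplicate"]

# Rows no template rule recognized (they end up in the unrecognized folder):
unrecognized = df[df["Curated BIDS path"].str.startswith("unrecognized", na=False)]

# Rows ignored at the session (S), acquisition (A), or file (F) level:
ignored = df[df["Ignored"].isin(["S", "A", "F"])]

print(f"{len(duplicates)} duplicate, {len(unrecognized)} unrecognized, {len(ignored)} ignored files")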
When you see curly braces in the Curated BIDS Path, it means that some BIDS field was not properly detected. For example, if the file name is sub-DEV_ses-20180918114023_task-{file.info.BIDS.Task}_bold.nii.gz, it means that the Task was not detected.
In the ReproIn BIDS Curation template for the "reproin_func_file" rule, the Task is found by this regular expression:
"Task": { "acquisition.label": { "$regex": "(^|_)task-(?P<value>.*?)(_(acq|ce|dir|echo|mod|proc|rec|recording|run|task)-|$|_)" } },
That is, the value of the Task field is set to whatever follows "task-" in the acquisition label, which can then be followed by an underscore "_" and various other possible fields. If the task is missing or doesn't follow what this regular expression expects, the field is left blank and the curly braces appear. This is not a valid BIDS name and will cause an error, so it must be addressed by either renaming acquisitions or modifying the project curation template.
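To see why a particular label does or does not produce a Task value, you can test it against the same regular expression outside of the gear. Here is a short Python sketch (the example labels are hypothetical):

# Extract the Task entity from an acquisition label using the same regular
# expression as the "reproin_func_file" rule quoted above.
import re

TASK_REGEX = r"(^|_)task-(?P<value>.*?)(_(acq|ce|dir|echo|mod|proc|rec|recording|run|task)-|$|_)"

def extract_task(acquisition_label):
    match = re.search(TASK_REGEX, acquisition_label)
    return match.group("value") if match else None

print(extract_task("func-bold_task-rest_run-01"))  # rest
print(extract_task("func-bold_run-01"))            # None -> curly braces in the BIDS name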
If a scan is not properly named, or if there is no rule in the project curation template to recognize that acquisition, it appears in the spreadsheet as unrecognized and shows up under the "nonBids" section in BIDS View.
To address this problem, change the acquisition label, or modify the project curation template either by adjusting an existing rule so that it applies to that scan or by adding a new rule. Not all sections of the BIDS Specification are covered by Flywheel's project curation templates, so new rules and definitions may need to be added, especially since the BIDS Specification is a moving target.
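Renaming a mis-labeled acquisition can be done in the Flywheel web interface, or scripted with the Flywheel Python SDK. The following is a hedged sketch; the lookup path and the new label are hypothetical, and you should confirm the calls against the flywheel-sdk documentation for your site's version.

# Rename a mis-labeled acquisition so an existing ReproIn rule will match it.
# The lookup path and the new label below are hypothetical examples.
import flywheel

fw = flywheel.Client("<your-api-key>")  # or rely on credentials cached by `fw login`

acq = fw.lookup("bids-curation-test/Levitas Tutorial/sub-01/ses-01/my_task_scan")
acq.update(label="func-bold_task-rest_run-01")  # a ReproIn-style label

# Then re-run the curate-bids gear so the new label is picked up.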
There are three spreadsheets that describe acquisitions in the project:
<group>_<project>_acquisitions.csv
<group>_<project>_acquisitions_details_1.csv
<group>_<project>_acquisitions_details_2.csv
The acquisitions.csv file lists the common acquisition labels along with the Usual Count across all subjects. The Usual Count is the number of times a specific acquisition appears for most subjects. It is calculated by counting, for each subject, how many times an acquisition with a specific label appears (a histogram across subjects) and taking the most common count (the mode). This is a way to figure out which scans are important. If a particular acquisition label was not acquired for most subjects, the most common count will be zero, and it won't appear in this list. For properly named acquisitions, the important ones will likely have a Usual Count of 1, which means the scan should be acquired once for every subject. But it can also be greater than 1 if some other information is used to disambiguate the Curated BIDS name (such as the echo number or something that sets the "run-" BIDS field).
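The same calculation, together with flagging the subjects that the detail spreadsheets surface, can be sketched in a few lines of Python (the per-subject counts are made up for illustration):

# Compute the "Usual Count" of an acquisition label as the mode of its
# per-subject counts, then flag subjects that deviate from it.
from collections import Counter

# Made-up example: how many times one label appeared for each subject.
per_subject_counts = {"sub-01": 1, "sub-02": 1, "sub-03": 2, "sub-04": 1}

usual_count = Counter(per_subject_counts.values()).most_common(1)[0][0]
outliers = {s: n for s, n in per_subject_counts.items() if n != usual_count}

print(usual_count)  # 1
print(outliers)     # {'sub-03': 2}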
The acquisitions.csv spreadsheet also lists every subject and prints errors or warnings about extra or missing scans. First it lists Subjects that have all of the Typical Acquisitions and includes warnings about extra scans. Then it lists Subjects that don't have Typical Acquisitions and provides warnings along with errors indicating that the usual scans are missing.
The acquisitions_details_1.csv file gives the total number of subjects and sessions and provides a list of all of the unique acquisition labels in the project along with the number of times that label was found. For the usual acquisitions, there should be as many of a particular label as there are subjects.
The acquisitions_details_2.csv file lists acquisition labels for each subject, but only when the number found for that subject is not equal to the expected number (the number that most subjects have). These two spreadsheets are designed to find subjects or acquisition labels that are outliers, while the acquisitions.csv spreadsheet shows what is common for most subjects.