Instructions are below
You need to submit some data to work with. This can be done either via cut and paste or via file upload.
Each name should be on a new line. Try cutting and pasting a column from a spreadsheet if you like.
The first row of the CSV file will be taken as the column headers for the file.
Set the parameters you'd like to use during the matching phase.
Actually run the matching process.
Note on Encoding:
UTF-8 encoding is assumed throughout.
This should work seamlessly except in one situation.
If you download a file and open it with Microsoft Excel by double clicking on the file itself, Excel may assume the wrong encoding.
To preserve the encoding import the file via File > Import > CSV and choose Unicode (UTF-8) from the "File origin" dropdown.
Files saved as CSV from Excel are UTF-8 encoded by default.
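If your source file is not already UTF-8, it is safest to convert it before upload. A minimal sketch of such a conversion is below; the assumed source encoding (Windows-1252, a common spreadsheet export encoding) is only an assumption, so check what your software actually produced.

```python
def to_utf8(data: bytes, source_encoding: str = "cp1252") -> bytes:
    """Re-encode raw file bytes as UTF-8.

    The default source encoding (Windows-1252) is an assumption --
    adjust it to match your actual export settings.
    """
    return data.decode(source_encoding).encode("utf-8")

# An author abbreviation with a non-ASCII character, encoded as Windows-1252.
raw = "Müll. Arg.".encode("cp1252")
print(to_utf8(raw).decode("utf-8"))  # -> Müll. Arg.
```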
This tool is for attaching WFO name IDs to your data based on the name strings you have. You submit your data, run the matching process and download a CSV file with three additional columns appended.
The names you submit must be complete and include the authors. They should have one, two or three "name words". You will get unreliable results if you include varieties of subspecies (four name words). Ranks (either in full or using common abbreviations) are OK to include. Hybrid symbols will be stripped out at the start of the process.
The easiest way to get started is to cut and paste a column of names into the text box in the form and click "Submit Data". If you have the authors in a second column then it is OK to copy the two columns into the text box. The matching process will merge them.
Once you have tried it out with a few names cut and paste into the text box you could try uploading a CSV file. All the columns in the CSV file will be returned to you in the results, so this technique can be used to bind WFO IDs to your local IDs and other data. If you have the name and authors in separate columns you must combine them into a single column before upload.
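Combining separate name and author columns is easy to do before upload. The sketch below uses Python's standard csv module; the column names "name", "authors" and "full_name" are hypothetical, so adjust them to match your own file.

```python
import csv
import io

def combine_columns(src, dst, name_col="name", author_col="authors",
                    combined_col="full_name"):
    """Append a column joining the name string and its authorship.

    The column names used here are assumptions -- change them to match
    the headers in your own CSV file.
    """
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + [combined_col])
    writer.writeheader()
    for row in reader:
        row[combined_col] = f"{row[name_col]} {row[author_col]}".strip()
        writer.writerow(row)

# Small demonstration with in-memory data; pass open file handles instead.
src = io.StringIO("id,name,authors\n1,Quercus robur,L.\n")
dst = io.StringIO()
combine_columns(src, dst)
print(dst.getvalue().splitlines()[1])  # -> 1,Quercus robur,L.,Quercus robur L.
```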
The matching process can be parameterised. The default values are usually OK to start with, but if you have uploaded a CSV file you must, at a minimum, specify the column that contains the name strings.
Recommendation: Do not turn on interactive mode the first time you run the matching process. A non-interactive first run will give you an idea of how dirty the data is and how much work will be needed to reach 100% matching using interactive mode.
Once you have submitted data and set the parameters you can do a matching run. If you have submitted a large file then the page may refresh multiple times, so be patient.
You can do multiple matching runs on the same data, perhaps one with interactive mode off followed by a run with it turned on.
You can download the results of the matching at any time after you first run the matching process. To avoid data loss, download your data frequently. Data is only stored as long as your session lasts. If you walk away and come back later it may be gone! You can upload the file you have downloaded if you want to continue an earlier session.
If an unambiguous match is not made for a name in your data then the near matches (candidates) are written to a file called candidates.csv. For each candidate name your input row is repeated along with a relative matching score. This occurs in both interactive and non-interactive modes. You can download this file if you would like to resolve matching issues locally. The candidates.csv file is deleted at the beginning of each matching run, i.e. when you click the "Run Matching" button. The file simply logs the output of the matching process as it happens; it is not updated afterwards.
Recommendation: If you have, say, 10% unmatched names and you'd like to work on them somewhere else, turn off interactive mode, run the matching one last time, then download the candidates.csv file. It will contain the candidates for all of your unmatched names, and only your unmatched names.
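When working on unmatched names locally, a common first step is to pick the highest-scoring candidate for each input row. A minimal sketch is below; the column names "id" and "score" are assumptions, so check the header of your downloaded candidates.csv and adjust accordingly.

```python
import csv
import io

def best_candidates(src, score_col="score", key_col="id"):
    """Group candidate rows by input row and keep the highest-scoring one.

    The "score" and "id" column names are assumptions -- match them to
    the actual headers in your candidates.csv.
    """
    best = {}
    for row in csv.DictReader(src):
        key = row[key_col]
        if key not in best or float(row[score_col]) > float(best[key][score_col]):
            best[key] = row
    return list(best.values())

# Demonstration: two candidates for the same input row.
sample = io.StringIO(
    "id,name,score\n"
    "1,Quercus robur L.,0.95\n"
    "1,Quercus rubra L.,0.60\n"
)
for row in best_candidates(sample):
    print(row["name"])  # -> Quercus robur L.
```

Note that the highest score is not always the right answer, so treat the result as a shortlist to review rather than a final decision.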
No limit is set on the number of names that can be matched in one go, beyond the filesize upload limit. The process works well with CSV files with tens of thousands of rows. The process will probably fail with more than one hundred thousand rows.
If you have a large number of names to match it is highly recommended that you break your work into logical batches of a few tens of thousands of names each. This is worth doing for the human factor alone: a large dataset may contain more ambiguous names than a human is able to disambiguate in one session.
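Splitting a large file into header-preserving batches can be scripted in a few lines. The sketch below keeps the header row on every batch so each one can be uploaded independently.

```python
import csv
import io

def split_batches(src, batch_size):
    """Split a CSV (an open text stream) into batches of at most
    batch_size data rows, repeating the header row in each batch."""
    reader = csv.reader(src)
    header = next(reader)
    batches, current = [], []
    for row in reader:
        current.append(row)
        if len(current) == batch_size:
            batches.append([header] + current)
            current = []
    if current:  # leftover rows form the final, shorter batch
        batches.append([header] + current)
    return batches

# Demonstration: 5 data rows in batches of 2 -> 3 batches.
sample = io.StringIO("name\n" + "\n".join(f"sp{i}" for i in range(5)) + "\n")
print(len(split_batches(sample, 2)))  # -> 3
```

Each batch can then be written out with csv.writer and uploaded as its own file.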
If you frequently need to rematch many thousands of names, please consider installing a local copy of this matching service (see Scalability and Performance). This is a shared resource, and a stressed server will slow down access for other users.