• Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: tools-1.5.1-incubating
    • Fix Version/s: tools-1.5.1-incubating
    • Component/s: None
    • Labels:


      I noticed bad precision in the FMeasure results. I think the issue is that the current implementation is summing divisions: it computes the precision and recall for every sample, and then adds the per-sample results to compute the overall result. By doing that, the errors related to each division are summed and can impact the final result.
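      For illustration, here is a minimal, hypothetical Java sketch (not the actual FMeasure code; the numbers are made up) showing how the mean of per-sentence precisions can differ from the precision computed over aggregated counts:

        // Hypothetical counts for two evaluated sentences; this illustrates
        // the averaging problem, it is not OpenNLP code.
        public class PrecisionDemo {

            public static void main(String[] args) {
                int[] correct   = {1, 9};   // correctly predicted spans per sentence
                int[] predicted = {2, 10};  // predicted spans per sentence

                // Per-sentence precisions averaged (the old behavior):
                // (1/2 + 9/10) / 2 = 0.70
                double meanOfRatios =
                        ((double) correct[0] / predicted[0]
                        + (double) correct[1] / predicted[1]) / 2;

                // Precision over aggregated counts (what conlleval reports):
                // (1 + 9) / (2 + 10) = 0.8333...
                double ratioOfSums =
                        (double) (correct[0] + correct[1])
                        / (predicted[0] + predicted[1]);

                System.out.printf("mean of ratios: %.4f%n", meanOfRatios); // 0.7000
                System.out.printf("ratio of sums:  %.4f%n", ratioOfSums);  // 0.8333
            }
        }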
      I found the problem while implementing the ChunkerEvaluator. To verify the evaluator, I compared the results we get from OpenNLP with those of the conlleval Perl script. The results were always different when I processed more than one sentence, because the implementation was using FMeasure.updateScores(), which was summing divisions.
      To solve that and get the same results as conlleval, I basically stopped using the Mean class.
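      A minimal sketch of the count-accumulating approach (names are illustrative, not the actual OpenNLP API): keep running totals of the predicted, reference, and correct spans across all samples, and perform the divisions only once at the end.

        import java.util.HashSet;
        import java.util.Set;

        // Illustrative sketch of accumulating counts instead of averaging
        // per-sample scores; field and method names are hypothetical.
        public class CountingFMeasure {
            private long selected;  // total spans predicted
            private long target;    // total spans in the reference
            private long correct;   // total spans predicted correctly

            public void updateScores(Set<String> references, Set<String> predictions) {
                selected += predictions.size();
                target += references.size();
                Set<String> hits = new HashSet<>(predictions);
                hits.retainAll(references);
                correct += hits.size();
            }

            public double precision() {
                return selected > 0 ? (double) correct / selected : 0;
            }

            public double recall() {
                return target > 0 ? (double) correct / target : 0;
            }

            public double fMeasure() {
                double p = precision();
                double r = recall();
                // Return -1 when the measure is undefined (no predictions, no targets).
                return p + r > 0 ? 2 * p * r / (p + r) : -1;
            }
        }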


        William Colen created issue
        William Colen made changes
        Status: Open [ 1 ] → In Progress [ 3 ]
        William Colen made changes
        Status: In Progress [ 3 ] → Open [ 1 ]
        William Colen made changes
        Status: Open [ 1 ] → Resolved [ 5 ]
        Resolution: Fixed [ 1 ]
        William Colen made changes
        Status: Resolved [ 5 ] → Closed [ 6 ]


          • Assignee:
            William Colen
          • Votes:
            0
          • Watchers:
            0


            • Created: