@Robert: Yes, there must be a well-defined way to tell whether the language even has a profile. It's not important HOW we improve detection certainty, but comparing the top n distances could help. I'm also a fan of including metrics other than profile similarity if that can help, though languages with unique scripts are already covered well by profile similarity alone. Detailed solution discussions should continue in TIKA-369.
Macro languages: See TIKA-493
It makes sense to allow detecting languages outside ISO 639-1, and I believe RFC 3066 and BCP 47 both reuse the ISO 639 codes, so if a 2-letter code exists for a language it will be used. 639-1 is what "everyone" already knows.
In general, improvements should be done in Tika space and then consumed in Solr, thus building one strong language detection library.
@Grant: I actually planned to do the regex-based field name mapping in a separate UpdateProcessor, to make things more flexible.
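A hypothetical sketch of what such a chain could look like in solrconfig.xml (the processor class names and the parameter names are assumptions for illustration, not a finished API):

```xml
<updateRequestProcessorChain name="langid">
  <!-- Hypothetical: detect language of the listed fields,
       store the result in the "language" field -->
  <processor class="solr.LanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">title,body</str>
    <str name="langid.langField">language</str>
  </processor>
  <!-- Hypothetical separate processor doing the regex-based renaming,
       e.g. title -> title_no for a Norwegian document -->
  <processor class="solr.RegexFieldMappingUpdateProcessorFactory">
    <str name="map.pattern">(.*)</str>
    <str name="map.replace">$1_{lang}</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```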
Your idea of detecting the language of individual fields in one go is also interesting. I'd love to see metadata support in SolrInputDocument, so that one processor could annotate a @language on each field analyzed. Then the next processor could act on that metadata to rename the fields...
@Yonik: By allowing regex naming of field names, we give users a generic tool to avoid field name clashes, by picking the pattern. Mapping multiple languages to the same suffix also makes sense.
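A minimal Java sketch of both ideas combined (the field names, the `{lang}` placeholder, and the language-to-suffix table are illustrative assumptions, not an actual Solr API):

```java
import java.util.Map;
import java.util.regex.Pattern;

public class LangFieldMapper {
    // Illustrative: collapse several detected languages onto one suffix,
    // e.g. Bokmål and Nynorsk both map to a shared "no" field suffix
    static final Map<String, String> LANG_TO_SUFFIX = Map.of(
            "nb", "no",
            "nn", "no",
            "no", "no",
            "en", "en");

    // Rename a field using a user-chosen regex pattern and replacement;
    // "{lang}" in the replacement is substituted with the mapped suffix
    static String mapField(String field, String lang,
                           String pattern, String replacement) {
        // Unknown languages fall back to their own code as the suffix
        String suffix = LANG_TO_SUFFIX.getOrDefault(lang, lang);
        return Pattern.compile(pattern)
                      .matcher(field)
                      .replaceFirst(replacement.replace("{lang}", suffix));
    }

    public static void main(String[] args) {
        System.out.println(mapField("title", "nb", "(.*)", "$1_{lang}")); // title_no
        System.out.println(mapField("title", "nn", "(.*)", "$1_{lang}")); // title_no
        System.out.println(mapField("body", "en", "(.*)", "$1_{lang}"));  // body_en
    }
}
```

Because the user picks the pattern and replacement, they can choose any naming scheme that avoids clashes with their existing fields.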