I'll take it. I've done the unigram+bigram approach already (maybe we can just have it as a separate filter option), so the bigram-only case should be easy.
My original design just lets you provide a BitSet of script codes (this should be simple, I think, to parse from, say, a Solr factory).
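Just to illustrate the idea (this is not the actual implementation, and all names here are hypothetical): a factory could parse a comma-separated attribute into a script set, which the filter then consults per codepoint. The sketch below uses the JDK's Character.UnicodeScript for readability in place of a raw BitSet of script codes:

```java
import java.util.EnumSet;
import java.util.Set;

public class ScriptFilterSketch {
    // Hypothetical factory logic: parse an attribute like
    // scripts="HAN,HIRAGANA,KATAKANA" into a set of script codes.
    static Set<Character.UnicodeScript> parseScripts(String attr) {
        Set<Character.UnicodeScript> scripts =
            EnumSet.noneOf(Character.UnicodeScript.class);
        for (String name : attr.split(",")) {
            scripts.add(Character.UnicodeScript.valueOf(name.trim().toUpperCase()));
        }
        return scripts;
    }

    // The filter would bigram a codepoint only if its script is in the set.
    static boolean shouldBigram(int codePoint, Set<Character.UnicodeScript> scripts) {
        return scripts.contains(Character.UnicodeScript.of(codePoint));
    }

    public static void main(String[] args) {
        Set<Character.UnicodeScript> s = parseScripts("HAN,HIRAGANA");
        System.out.println(shouldBigram('漢', s)); // true: Han is in the set
        System.out.println(shouldBigram('a', s));  // false: Latin is not
    }
}
```

A real version would likely store ICU script codes in a BitSet for speed, but the membership test is the whole trick.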
I think it's also useful to have an option for whether the filter should only do this for "joined" text (based on offsets). For CJK I think it makes sense to enforce this, so that it won't bigram across sentence boundaries. But for, say, Tibetan, which has a syllable separator, you would want to turn this off.
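The "joined" check is just offset adjacency: bigram two tokens only when the previous token's end offset equals the current token's start offset, i.e. no separator sat between them. A minimal sketch (hypothetical Token type and method names, not the real filter API):

```java
import java.util.ArrayList;
import java.util.List;

public class BigramSketch {
    // Hypothetical token: single-codepoint term with character offsets.
    record Token(String term, int start, int end) {}

    // Bigram adjacent tokens; if requireJoined is true, skip pairs whose
    // offsets don't touch (a separator or punctuation sat between them).
    static List<String> bigrams(List<Token> tokens, boolean requireJoined) {
        List<String> out = new ArrayList<>();
        for (int i = 1; i < tokens.size(); i++) {
            Token prev = tokens.get(i - 1), cur = tokens.get(i);
            if (!requireJoined || prev.end() == cur.start()) {
                out.add(prev.term() + cur.term());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "東京。大阪": the punctuation leaves a gap between 京 and 大.
        List<Token> toks = List.of(
            new Token("東", 0, 1), new Token("京", 1, 2),
            new Token("大", 3, 4), new Token("阪", 4, 5));
        System.out.println(bigrams(toks, true));  // [東京, 大阪]
        System.out.println(bigrams(toks, false)); // [東京, 京大, 大阪]
    }
}
```

With the option on, "東京" and "大阪" each bigram internally but no bigram crosses the sentence boundary; with it off, the spurious "京大" appears.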
Separately, if you want it to work "just like" CJKTokenizer, please be aware that by default the Unicode standard tokenizes Katakana into words (only Hiragana and Han are tokenized into codepoints). So in this case you would have to use a custom ruleset if you wanted Katakana tokenized into codepoints instead of words, for later bigramming. I'm not sure you want to do this, though... (in truth, CJKTokenizer bigrams ANYTHING outside ASCII, including a lot of things it shouldn't).
For Hangul the same warning applies, but it's more debatable: you might want to do this if you don't have a decompounder... but in my opinion this is past tokenization, and it's the same problem you have with German, etc.; the default tokenization is not "wrong".
In either case, if you decide to do that, it would be a pretty simple ruleset!
Let me know if this makes sense to you.