SARGs are a best-effort optimization; they don't guarantee that all types and predicates are supported. For example, a predicate that calls a UDF, like "where MyUdf(X) = 20", can't be translated into a SARG. Even for predicates that can be handled, currently only blocks of rows are tested, and a block is accepted if any of its rows may pass the complete filter. In all cases the complete filter is still applied by Hive; SARGs only optimize which groups of rows need to be read from HDFS at all.
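To make the "blocks of rows" behavior concrete, here's a minimal sketch (not Hive's actual implementation) of SARG-style pruning for an equality predicate: each row group carries min/max stats, a group is read only if it *might* contain a match, and the exact filter is still evaluated on every row that survives pruning.

```python
def maybe_contains(group_min, group_max, value):
    """SARG-style test for the predicate `col = value`.

    Returns True ("maybe") if some row in the group could match,
    False ("no") if the whole group can be skipped.
    """
    return group_min <= value <= group_max

def filter_rows(row_groups, value):
    """Skip groups whose stats rule out a match, then apply the full filter."""
    result = []
    for rows in row_groups:
        if not maybe_contains(min(rows), max(rows), value):
            continue  # group skipped: its rows are never read
        # The complete predicate is still applied to every surviving row.
        result.extend(r for r in rows if r == value)
    return result

groups = [[1, 3, 5], [10, 20, 30], [40, 50]]
print(filter_rows(groups, 20))  # → [20]; only the middle group is read
```

The important property, matching the comment above, is that pruning is conservative: a group is only skipped when no row in it can possibly pass, so the final filter never sees a false negative.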
That said, we should add decimal, date, and timestamp support to SARGs. That will be a bigger project, so I'll file a separate JIRA.
This issue is about preventing the optimization from causing run-time errors. :)