Details

Type: Bug
Status: Open
Priority: Minor
Resolution: Unresolved
Affects Version/s: 0.7, 0.8
Fix Version/s: None
Description
To put this in context, I am just doing a scan where the start key, the end key and the limit are configurable:
Query<String, Persistent> dataQuery = dataStore.newQuery();
if (startKey != null && !startKey.equals("")) {
  dataQuery.setStartKey(startKey);
}
if (endKey != null && !endKey.equals("")) {
  dataQuery.setEndKey(endKey);
}
dataQuery.setLimit(limit);
Result<?, Persistent> result = dataQuery.execute();
while (result.next()) {
  results.put(result.getKey(), result.get());
}
When the start key is equal to the end key and the limit is set to a value >= 2 (the default value is -1), the second call to result.next() in the while loop clears the instance previously returned by result.get().
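For illustration, this is a minimal sketch of a test exercising that scenario (the real test is in the attached patch); the Employee bean, its getName() accessor and the way the store is initialised and populated are assumptions made for the example, not part of the patch:

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import org.apache.gora.query.Query;
import org.apache.gora.query.Result;
import org.apache.gora.store.DataStore;
import org.junit.Test;

public class ScanSingleResultWithLimitSketch {

  // Assumed to be initialised elsewhere (e.g. via DataStoreFactory in a
  // @Before method) and populated with a single row under the key "row-1".
  private DataStore<String, Employee> dataStore;

  @Test
  public void testScanSingleResultWithLimit() throws Exception {
    Query<String, Employee> query = dataStore.newQuery();
    query.setStartKey("row-1");
    query.setEndKey("row-1"); // start key equal to end key
    query.setLimit(2);        // any limit >= 2 triggers the problem

    Result<String, Employee> result = query.execute();
    assertTrue(result.next());
    Employee first = result.get();
    assertNotNull(first);

    // The second next() should only report that there are no more results,
    // but it also clears the instance previously returned by result.get().
    result.next();
    assertNotNull(first.getName()); // may fail here: "first" has been cleared
  }
}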
One could argue that this is expected behaviour, since for HBase specifically result.get() returns a reusable instance when performing a Get operation, but this clashes with the general behaviour expected from the usual Scan operation shown in the code example above.
That is: when performing a scan, next() and get() should behave the same way regardless of which start/end keys you configure and what maximum number of results you request.
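Until this is fixed, a possible caller-side workaround (just a sketch, not part of the patch) is to deep-copy each bean before advancing the result; it assumes the value type also implements Avro's SpecificRecord, as Gora-generated beans do, so Avro's deepCopy can be used:

import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.avro.specific.SpecificData;
import org.apache.avro.specific.SpecificRecord;
import org.apache.gora.persistency.Persistent;
import org.apache.gora.query.Result;

public final class ResultCopies {

  /**
   * Drains a Result into a map, deep-copying every value so the caller keeps
   * independent instances even if the Result reuses or clears the bean
   * returned by get().
   */
  public static <K, T extends Persistent & SpecificRecord> Map<K, T> drain(
      Result<K, T> result) throws Exception {
    Map<K, T> copies = new LinkedHashMap<>();
    while (result.next()) {
      T value = result.get();
      // Avro's deepCopy builds a fresh object graph from the bean's schema.
      T copy = SpecificData.get().deepCopy(value.getSchema(), value);
      copies.put(result.getKey(), copy);
    }
    return copies;
  }

  private ResultCopies() { }
}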
I implemented a test that shows the issue affecting Accumulo, Cassandra, HBase, JCache, MongoDB and Solr, which suggests the problem lies in the core module.
To see the error, apply the attached patch with the example tests and execute:
mvn -Dtest=#testScanSingleResultWithLimit -fn -DfailIfNoTests=false test