Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Won't Do
- 3.2.0-incubating
Description
twilmes has developed gremlin-benchmark, which is slated for 3.2.0 (TINKERPOP-1016). This is really good, as it means we can ensure the Gremlin traversal machine only speeds up with each release. Here is a collection of things I would like to be able to do with gremlin-benchmark.
Benchmarks in the Strategy Tests
// ensure that traversalA is at least 1.5 times faster than traversalB
assertTrue(Benchmark.compare(traversalA, traversalB) > 1.50d);
With this, I can have an OptimizationStrategy applied to traversalA and not to traversalB and prove via "mvn clean install" that the strategy is in fact "worth it." I bet there are other good static methods we could create. Hell, why not just have a BenchmarkAsserts that we can statically import like JUnit's Assert? Then it's just:
assertFaster(traversalA, traversalB, 1.50d);
assertSmaller(traversalA, traversalB);               // memory usage or object creation?
assertTime(traversal, 1000, TimeUnit.MILLISECONDS);  // has to complete in 1 second?
... ?
It's a little scary, as not all computers are the same, but it would be nice to know that we have tests for space and time costs.
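As a rough illustration, here is what a statically importable BenchmarkAsserts could look like. Everything below (the class name, assertFaster, the warmup and run counts) is a hypothetical sketch, not part of the existing gremlin-benchmark module; the Supplier indirection is only there because a traversal can be iterated once.

import java.util.function.Supplier;

import org.apache.tinkerpop.gremlin.process.traversal.Traversal;

public final class BenchmarkAsserts {

    private static final int WARMUP_RUNS = 10;    // hypothetical: let the JIT settle first
    private static final int MEASURED_RUNS = 50;  // hypothetical: runs that actually count

    private BenchmarkAsserts() {
    }

    // fails unless traversalA is at least 'factor' times faster than traversalB
    public static void assertFaster(final Supplier<Traversal<?, ?>> traversalA,
                                    final Supplier<Traversal<?, ?>> traversalB,
                                    final double factor) {
        final double speedup = (double) averageTime(traversalB) / (double) averageTime(traversalA);
        if (speedup < factor)
            throw new AssertionError("expected a " + factor + "x speedup but measured " + speedup + "x");
    }

    private static long averageTime(final Supplier<Traversal<?, ?>> traversal) {
        for (int i = 0; i < WARMUP_RUNS; i++) {
            traversal.get().iterate();
        }
        long total = 0L;
        for (int i = 0; i < MEASURED_RUNS; i++) {
            final long start = System.nanoTime();
            traversal.get().iterate();
            total += System.nanoTime() - start;
        }
        return total / MEASURED_RUNS;
    }
}

A strategy test would then pass two suppliers, one built from a traversal source with the OptimizationStrategy under test and one built from a source with it removed (e.g. via withoutStrategies()).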
Benchmarks saved locally over the course of a release
This is tricky, but it would be cool if local files (not committed to GitHub) were created like this:
tinkerpop3/gremlin-benchmark/benchmarks/g_V_out_out_12:23:66UT23.txt
Then a test case could ensure that all newer runs of that benchmark are faster than older ones. If a run is, let's say, 10%+ slower, an exception is thrown and the test fails. ??
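Here is a rough sketch of how such a local-history check could work, assuming one timestamped file per run that holds the elapsed nanoseconds; the directory layout, file naming, helper names, and the tolerance parameter are all assumptions, not an existing gremlin-benchmark API.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public final class BenchmarkHistory {

    private static final Path BENCHMARK_DIR = Paths.get("gremlin-benchmark", "benchmarks");

    // records the latest run and fails if it is more than 'tolerance' (e.g. 0.10 for 10%)
    // slower than the previous recorded run of the same benchmark
    public static void assertNoRegression(final String benchmarkName,
                                          final long elapsedNanos,
                                          final double tolerance) throws IOException {
        Files.createDirectories(BENCHMARK_DIR);
        final List<Path> previousRuns;
        try (Stream<Path> files = Files.list(BENCHMARK_DIR)) {
            previousRuns = files
                    .filter(p -> p.getFileName().toString().startsWith(benchmarkName + "_"))
                    .sorted(Comparator.comparingLong((Path p) -> p.toFile().lastModified()))
                    .collect(Collectors.toList());
        }
        // append the new measurement: one timestamped file per run (assumed format)
        Files.write(BENCHMARK_DIR.resolve(benchmarkName + "_" + System.currentTimeMillis() + ".txt"),
                String.valueOf(elapsedNanos).getBytes());
        if (previousRuns.isEmpty())
            return; // first run, nothing to compare against
        final Path lastRun = previousRuns.get(previousRuns.size() - 1);
        final long previousNanos = Long.parseLong(new String(Files.readAllBytes(lastRun)).trim());
        if (elapsedNanos > previousNanos * (1.0 + tolerance))
            throw new AssertionError(benchmarkName + " regressed: " + elapsedNanos +
                    "ns vs. previous " + previousNanos + "ns");
    }
}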
What else can we do? Can we know whether a certain area of code is faster? For instance, strategy application or requirements aggregation? If we can introspect like that, that would be stellar.
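On the introspection question, one phase that can already be isolated is strategy application, since Traversal.Admin exposes applyStrategies(). The timing loop below is only a sketch of measuring that phase by itself, using TinkerGraph's toy "modern" graph as stand-in data; the harness and run count are assumptions.

import org.apache.tinkerpop.gremlin.process.traversal.Traversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory;

public class StrategyApplicationTiming {
    public static void main(final String[] args) {
        final Graph graph = TinkerFactory.createModern();
        final GraphTraversalSource g = graph.traversal();
        final int runs = 1000;
        long total = 0L;
        for (int i = 0; i < runs; i++) {
            // build a fresh traversal each run because strategies can only be applied once
            final Traversal.Admin<?, ?> traversal = g.V().out().out().asAdmin();
            final long start = System.nanoTime();
            traversal.applyStrategies();   // measure only compilation, not iteration
            total += System.nanoTime() - start;
        }
        System.out.println("average strategy application time: " + (total / runs) + "ns");
    }
}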