• Type: Sub-task
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: 2.9
    • Component/s: data structures
    • Labels:


      We need to add an IgniteMultimap data structure in addition to the other data structures provided by Ignite. IgniteMultimap should have an API similar to the java.util.Map class in the JDK, but support the semantics of multiple values per key, similar to Guava's Multimap.

      However, unlike Guava, our multimap should work with Lists, not Collections. Using Lists makes it possible to support the following methods:

      // Gets value at a certain index for a key.
      V get(K, index);
      // Gets all values for a collection of keys at a certain index.
      Map<K, V> getAll(Collection<K>, index);
      // Gets values for specified indexes for a key.
      List<V> get(K, Iterable<Integer> indexes);
      // Gets all values for a collection of keys at specified indexes.
      Map<K, List<V>> getAll(Collection<K>, Iterable<Integer> indexes);
      // Gets values for specified range of indexes, between min and max.
      List<V> get(K, int min, int max);
      // Gets all values for a collection of keys for a specified index range, between min and max.
      Map<K, List<V>> getAll(Collection<K>, int min, int max);
      // Gets all values for a specific key.
      List<V> get(K);
      // Gets all values for a collection of keys.
      Map<K, List<V>> getAll(Collection<K>);
      // Iterate through all elements with a certain index.
      Iterator<Map.Entry<K, V>> iterate(int idx);
      // Do we need this?
      Collection<IgniteTuple<Integer, V>> get(K, IgniteBiPredicate<Integer, V>);

      The multimap should also support collocated and non-collocated modes, similar to IgniteQueue and its implementation, GridAtomicCacheQueueImpl.

      Design Details

      The most natural way to implement such a map would be to store every value under a separate key in an Ignite cache. For example, say we have a key K with multiple values V0, V1, V2, .... The cache should then end up with the entries (K, 0) -> V0, (K, 1) -> V1, (K, 2) -> V2, etc. This means that we need to wrap the user key into our own internal key, which also carries an index field.

      Also note that we need to collocate all the values for the same key on the same node, which means that we need to define user key K as the affinity key, like so:

      class MultiKey<K> {
          @AffinityKeyMapped
          private K key;

          private int index;
      }
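      A fleshed-out version of the internal key might look as follows. This is a sketch, not the final implementation: the @AffinityKeyMapped annotation from org.apache.ignite.cache.affinity is omitted here so the snippet compiles standalone, and the accessor names are placeholders. The important detail is that cache keys must define equals and hashCode over both fields.

```java
import java.util.Objects;

// Internal wrapper key: user key plus value index. In the real implementation
// the 'key' field would carry @AffinityKeyMapped so that all indexes of the
// same user key land on the same node.
class MultiKey<K> {
    private final K key;     // user key; serves as the affinity key
    private final int index; // position of the value in the list

    MultiKey(K key, int index) {
        this.key = key;
        this.index = index;
    }

    K key() { return key; }
    int index() { return index; }

    // Cache keys must be comparable by value, over both fields.
    @Override public boolean equals(Object o) {
        if (this == o)
            return true;
        if (!(o instanceof MultiKey))
            return false;
        MultiKey<?> other = (MultiKey<?>)o;
        return index == other.index && Objects.equals(key, other.key);
    }

    @Override public int hashCode() {
        return 31 * Objects.hashCode(key) + index;
    }
}
```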

      Lookups of values at specific indexes become very simple: attach the index to the key and do a cache lookup. Lookups of all values for a key should work as follows:

      V v;
      int index = 0;
      List<V> res = new LinkedList<>();

      do {
          v = cache.get(new MultiKey(K, index++));

          if (v != null)
              res.add(v);
      } while (v != null);

      return res;
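      The index-walk above can be sketched as a self-contained, runnable example, with a plain HashMap standing in for the Ignite cache and keys modeled as "userKey#index" strings (both are simplifications for illustration only):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the index-walk lookup: read values at index 0, 1, 2, ...
// until the first miss. A HashMap stands in for the Ignite cache.
class IndexWalkLookup {
    static List<String> getAll(Map<String, String> cache, String key) {
        List<String> res = new ArrayList<>();
        int index = 0;
        String v;

        do {
            // "key#index" plays the role of MultiKey(K, index).
            v = cache.get(key + "#" + index++);

            if (v != null)
                res.add(v);
        } while (v != null);

        return res;
    }
}
```

      Note that one cache round trip per value is needed, which motivates the batched variant below.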

      We could also use batching for performance reasons. In this case the batch size should be configurable.

      int index = 0;
      List<V> res = new LinkedList<>();

      while (true) {
          List<MultiKey> batch = new ArrayList<>(batchSize);

          // Populate batch.
          for (int i = 0; i < batchSize; i++)
              batch.add(new MultiKey(K, index++));

          Map<MultiKey, V> batchRes = cache.getAll(batch);

          // Potentially need to properly sort values, based on the key order,
          // if the returned map does not do it automatically.
          res.addAll(batchRes.values());

          // An incomplete batch means we reached the end of the list.
          if (batchRes.size() < batch.size())
              return res;
      }
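      The batched walk can likewise be sketched as a runnable example, again with a HashMap standing in for the cache and "userKey#index" strings standing in for MultiKey (assumptions for illustration only):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the batched lookup: fetch batchSize indexes at a time and stop
// as soon as a batch comes back incomplete. A HashMap stands in for the
// Ignite cache; a real implementation would use cache.getAll(batch).
class BatchedLookup {
    static List<String> getAll(Map<String, String> cache, String key, int batchSize) {
        List<String> res = new ArrayList<>();
        int index = 0;

        while (true) {
            // Populate the next batch of keys.
            List<String> batch = new ArrayList<>(batchSize);
            for (int i = 0; i < batchSize; i++)
                batch.add(key + "#" + index++);

            // Collect hits in index order (a real implementation would sort
            // by index if the returned map is unordered).
            int hits = 0;
            for (String k : batch) {
                String v = cache.get(k);
                if (v != null) {
                    res.add(v);
                    hits++;
                }
            }

            // An incomplete batch means we reached the end of the list.
            if (hits < batch.size())
                return res;
        }
    }
}
```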


      Evictions in the IgniteMultimap should have 2 levels: the maximum number of keys, and the maximum number of values per key. The maximum number of keys should be controlled by a standard Ignite eviction policy. The maximum number of values per key should be controlled by the multimap implementation itself. Both eviction parameters should be configurable.
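      The second eviction level could be enforced on append, as in the sketch below. The maxValuesPerKey parameter is an assumed configuration name, and dropping the oldest (lowest-index) values first is one possible policy, not a decided design:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-key value cap: after appending a value, evict from the
// head of the list until the list is back under the cap. maxValuesPerKey is
// an assumed, configurable parameter.
class PerKeyCap {
    static List<String> append(List<String> values, String v, int maxValuesPerKey) {
        List<String> res = new ArrayList<>(values);
        res.add(v);

        // Evict the oldest values while over the cap.
        while (res.size() > maxValuesPerKey)
            res.remove(0);

        return res;
    }
}
```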


              • Assignee:
                dsetrakyan Dmitriy Setrakyan