Details

    • Type: Task
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate

    Description

      Need to provide a stronger data loss check.

      Currently a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.

      However, this is not enough: if an application has a strict requirement on its behavior after data loss (e.g. further cache updates should throw an exception), that requirement currently cannot be met even with a cache interceptor.
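      For reference, this is roughly what the existing event hook looks like; it illustrates that the event is only a notification, not an enforcement point. The sketch assumes the event type has been enabled via IgniteConfiguration.setIncludeEventTypes:

        import org.apache.ignite.Ignite;
        import org.apache.ignite.events.CacheRebalancingEvent;
        import org.apache.ignite.events.EventType;
        import org.apache.ignite.lang.IgnitePredicate;

        void listenForDataLoss(Ignite ignite) {
            ignite.events().localListen((IgnitePredicate<CacheRebalancingEvent>)evt -> {
                // We only learn that data was lost; nothing stops subsequent updates.
                System.out.println("Lost partition " + evt.partition() +
                    " of cache " + evt.cacheName());

                return true; // keep listening
            }, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
        }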

      Suggestions:

      • Introduce a CacheDataLossPolicy enum (FAIL_OPS, NOOP) and add it to the cache configuration (see the sketch after this list).
      • If a node fires PART_LOST_EVT, any update to the lost partition will throw (or not throw) an exception according to the configured policy.
      • A ForceKeysRequest should be completed with an exception (if the policy is FAIL_OPS) when all nodes it could request data from have left, so all gets/puts/transactions fail.
      • Add a public API method that allows recovery from the failed state.
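      A minimal sketch of the proposed policy, assuming the guard below would be invoked on every cache operation. All names here (CacheDataLossPolicy, checkPartition, lostParts) follow this ticket's wording or are invented for illustration; they are not a released Ignite API:

        import java.util.Set;
        import javax.cache.CacheException;

        /** Hypothetical policy from this ticket; not part of any released Ignite version. */
        public enum CacheDataLossPolicy {
            /** Operations touching a lost partition throw CacheException. */
            FAIL_OPS,

            /** Lost partitions are treated as empty; operations proceed silently. */
            NOOP;

            /** Guard a cache would invoke for every operation touching partition 'part'. */
            public static void checkPartition(CacheDataLossPolicy plc, Set<Integer> lostParts, int part) {
                if (plc == FAIL_OPS && lostParts.contains(part))
                    throw new CacheException("Partition " + part + " has lost all its data owners.");
            }
        }

      For the recovery method in the last bullet, note that Ignite 2.0 eventually introduced Ignite.resetLostPartitions(...) together with a PartitionLossPolicy enum on CacheConfiguration, which covers much of this proposal.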

      Another solution is to detect partition loss at the time the partition exchange completes. Since the topology lock is held during the exchange, we can easily check that a partition has no owners left and act like a topology validator when the FAIL policy is configured. One thing needs careful analysis: the demand worker must not mark a partition as owning if its last owner leaves the grid before the corresponding exchange completes.
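      A sketch of that exchange-time check, assuming a per-partition view of owners is available when the exchange completes. PartitionTopologyView, owners(), and markLost() are hypothetical names, not Ignite internals:

        import java.util.List;
        import java.util.UUID;

        interface PartitionTopologyView {
            int partitions();
            List<UUID> owners(int part); // IDs of nodes currently owning the partition
            void markLost(int part);     // flag consulted by subsequent operations
        }

        class DataLossDetector {
            /** Called once a partition map exchange completes, while the topology lock is still held. */
            static void onExchangeDone(PartitionTopologyView top) {
                for (int p = 0; p < top.partitions(); p++) {
                    // The owner sets cannot change under us here, so an empty
                    // owner list reliably means the partition's data is gone.
                    if (top.owners(p).isEmpty())
                        top.markLost(p);
                }
            }
        }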


          Activity

            agoncharuk Alexey Goncharuk added a comment - Another solution is to detect partition loss at the time the partition exchange completes. Since the topology lock is held during the exchange, we can easily check that a partition has no owners left and act like a topology validator when the FAIL policy is configured. One thing needs careful analysis: the demand worker must not mark a partition as owning if its last owner leaves the grid before the corresponding exchange completes.
            githubbot ASF GitHub Bot added a comment -

            GitHub user VladimirErshov opened a pull request:

            https://github.com/apache/ignite/pull/407

            IGNITE-1605 implementation of data loss

            IGNITE-1605 implementation of data loss

            You can merge this pull request into a Git repository by running:

            $ git pull https://github.com/VladimirErshov/ignite ignite-1605_3

            Alternatively you can review and apply these changes as the patch at:

            https://github.com/apache/ignite/pull/407.patch

            To close this pull request, make a commit to your master/trunk branch
            with (at least) the following in the commit message:

            This closes #407


            commit 16fdb3e07782900e4e0925a6f1e7c7430c423d6f
            Author: vershov <vershov@gridgain.com>
            Date: 2016-01-15T17:01:09Z

            IGNITE-1605 implementation of data loss



            agoncharuk Alexey Goncharuk added a comment - Moving to 2.0 since the task requires a more thorough design.

            mmuzaf Maxim Muzafarov added a comment - Please refer to the partition loss policy handling discussion: https://issues.apache.org/jira/browse/IGNITE-13003

            People

              Assignee: Unassigned
              Reporter: Yakov Zhdanov
              Votes: 0
              Watchers: 4


                Time Tracking

                  Original Estimate: Not Specified
                  Remaining Estimate: 0h
                  Time Spent: 20m