Ignite / IGNITE-1605

Provide stronger data loss check


Details

    • Type: Task
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate

    Description

      We need to provide a stronger data loss check.

      Currently, a node can fire the EVT_CACHE_REBALANCE_PART_DATA_LOST event.

      However, this is not enough: if the application has a strict requirement on data loss behavior (e.g. further cache updates should throw an exception), that requirement currently cannot be met even with a cache interceptor.
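      For reference, this is roughly how the existing event can be consumed through the public events API; a listener can only observe the loss after the fact and has no way to make subsequent operations on the lost partition fail. A minimal sketch (note that event types are disabled by default and must be enabled via IgniteConfiguration#setIncludeEventTypes):

          import org.apache.ignite.Ignite;
          import org.apache.ignite.Ignition;
          import org.apache.ignite.events.CacheRebalancingEvent;
          import org.apache.ignite.events.Event;
          import org.apache.ignite.events.EventType;
          import org.apache.ignite.lang.IgnitePredicate;

          public class DataLossListener {
              public static void main(String[] args) {
                  Ignite ignite = Ignition.start();

                  // Purely observational: prints the lost partition but cannot
                  // veto or fail the cache operations that follow.
                  IgnitePredicate<Event> lsnr = evt -> {
                      CacheRebalancingEvent e = (CacheRebalancingEvent)evt;

                      System.out.println("Data lost: cache=" + e.cacheName() +
                          ", partition=" + e.partition());

                      return true; // keep listening
                  };

                  ignite.events().localListen(lsnr, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
              }
          }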

      Suggestions:

      • Introduce a CacheDataLossPolicy enum (FAIL_OPS, NOOP) and expose it in the cache configuration.
      • If a node fires the partition-lost event, then any update to the lost partition will throw (or will not throw) an exception, according to the configured policy.
      • ForceKeysRequest should be completed with an exception (if the policy is FAIL_OPS) when all nodes to request from are gone, so that all gets/puts/transactions fail.
      • Add a public API method to allow recovery from the failed state. A sketch of the proposed API follows this list.
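      A minimal sketch of the proposal. CacheDataLossPolicy, FAIL_OPS and NOOP are the names suggested above; the configuration setter and the recovery method are hypothetical names used for illustration only:

          /** Proposed per-cache policy controlling behavior on partition loss. */
          public enum CacheDataLossPolicy {
              /** Any operation touching a lost partition throws an exception. */
              FAIL_OPS,

              /** Data loss is ignored and operations proceed as usual. */
              NOOP
          }

          // Hypothetical configuration usage:
          //
          //     CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
          //     ccfg.setDataLossPolicy(CacheDataLossPolicy.FAIL_OPS);
          //
          // Hypothetical public recovery method that clears the failed state
          // once the application has handled the loss:
          //
          //     ignite.cache("myCache").recoverFromDataLoss();

      Keeping the policy in the cache configuration makes the behavior per-cache, so caches with different consistency requirements can coexist in one grid.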

      Another solution is to detect partition loss at the time the partition exchange completes. Since we hold the topology lock during the exchange, we can easily check that a partition has no owners and act as a topology validator when the FAIL_OPS policy is configured. One point needs careful analysis: the demand worker must not mark a partition as owning when the last owner leaves the grid before the corresponding exchange completes.
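      The exchange-time check could look roughly like the sketch below. All names here are hypothetical stand-ins for Ignite's internal exchange machinery; the point is only that partition ownership can be scanned safely while the topology lock is held:

          import java.util.HashSet;
          import java.util.List;
          import java.util.Map;
          import java.util.Set;

          public class LostPartitionDetector {
              private final int partitionCount;
              private final Set<Integer> lostParts = new HashSet<>();

              public LostPartitionDetector(int partitionCount) {
                  this.partitionCount = partitionCount;
              }

              /**
               * Called when a partition map exchange completes. The topology lock
               * is held during the exchange, so the ownership snapshot is stable.
               *
               * @param owners Partition number to owning node IDs, per the exchange result.
               * @param failOps Whether the FAIL_OPS policy is configured for the cache.
               */
              public void onExchangeDone(Map<Integer, List<String>> owners, boolean failOps) {
                  for (int p = 0; p < partitionCount; p++) {
                      List<String> nodes = owners.get(p);

                      // No owners left for this partition: its data is lost.
                      if (failOps && (nodes == null || nodes.isEmpty()))
                          lostParts.add(p); // operations on p fail until recovery
                  }
              }

              /** Whether an operation targeting the given partition must fail. */
              public boolean mustFail(int part) {
                  return lostParts.contains(part);
              }

              /** Hypothetical recovery hook clearing the failed state. */
              public void recover() {
                  lostParts.clear();
              }
          }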


          People

            Assignee: Unassigned
            Reporter: Yakov Zhdanov (yzhdanov)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:

              Time Tracking

                Estimated: Not Specified
                Remaining: 0h
                Logged: 20m
