HBASE-8626: RowMutations fail when Delete and Put on same columnFamily/column/row

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 0.94.7, 0.95.0
    • Fix Version/s: None
    • Component/s: regionserver
    • Labels: None
    • Environment: Ubuntu 12.04, HBase 0.94.7

      Description

      When a RowMutations contains a Delete followed by a Put to the same column family, column, or row, only the Delete takes effect while the Put is ignored, so the atomicity of RowMutations is broken for such cases.

      Attached is a unit test in which the following tests fail; a minimal reproduction sketch follows the list.

      • testDeleteCFThenPutInSameCF: Delete a column family and then Put to same column family.
      • testDeleteColumnThenPutSameColumn: Delete a column and then Put to same column.
      • testDeleteRowThenPutSameRow: Delete a row and then Put to same row.
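
      For illustration, a minimal sketch of the testDeleteColumnThenPutSameColumn case (0.94 client API; table, row, cf, col and the values are placeholders):

          RowMutations rm = new RowMutations(row);

          Delete delete = new Delete(row);
          delete.deleteColumn(cf, col);           // delete the column...
          rm.add(delete);

          Put put = new Put(row);
          put.add(cf, col, Bytes.toBytes("v2"));  // ...then put a new value to the same column
          rm.add(put);

          table.mutateRow(rm);
          // Expected: a subsequent Get returns "v2" for cf:col.
          // Actual: the column disappears, because both ops receive the same
          // server-assigned timestamp and the Delete masks the Put.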
      Attachments

      1. TestRowMutations.java (10 kB, Vinod)
      2. tests_for_row_mutations1.patch (11 kB, Vinod)
      3. 8626-v1.txt (12 kB, Ted Yu)

        Activity

        Vinod added a comment -

        Test case to reproduce the issue.

        Vinod added a comment -

        Attached a unit test patch for trunk.

        Varun Sharma added a comment -

        I think this is a manifestation of https://issues.apache.org/jira/browse/HBASE-2256

        This is a known issue, and I don't think we can fix it, since the Delete and the Put get the same timestamp. To work around it, the client needs to manually give the Put(s) a later timestamp than the Deletes.
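
        For illustration, a sketch of that client-side workaround (0.94 API; row, cf, col and v are placeholder byte arrays):

            long now = System.currentTimeMillis();

            Delete delete = new Delete(row);
            delete.deleteFamily(cf, now);      // Delete pinned at "now"

            Put put = new Put(row);
            put.add(cf, col, now + 1, v);      // Put pinned 1 ms later, so it survives the Delete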

        Varun Sharma added a comment -

        Alternatively, HBase could give a higher timestamp to all non-Delete ops in a RowMutations and give the Delete ops a lower timestamp. That would be one way to fix this...

        Jean-Marc Spaggiari added a comment -

        But if you do that, then if someone tries to do a Put followed by a Delete, the Delete will not be considered...

        In the end, I would just consider this the default HBase behavior...

        Varun Sharma added a comment -

        I actually meant that we do this only for transactions which contain a mix of deletes and puts with overlaps like this one.

        Another way to fix this would be to put the responsibility on the client to break out the mutations, and possibly add some documentation.

        Andrew Purtell added a comment -

        Currently, a RowMutations should not mix in Deletes that dominate some of the Puts, because all the ops will be given the same timestamp if not otherwise specified by the client.

        I actually meant that we do this only for transactions which contain a mix of deletes and puts with overlaps like this one.

        The semantics could be changed so that, for all mutations in the RowMutations with timestamp == HConstants#LATEST_TIMESTAMP, the mutation processor substitutes timestamps incremented by 1 at each op as the ops are applied with the row lock asserted. (What happens now, IIRC, is that the current time is snapshotted into a long, packed into a byte[], and reused to set the timestamp of every KV whose timestamp == HConstants#LATEST_TIMESTAMP.) Then if the client structures the RowMutations with Deletes ahead of Puts, it would work as expected even if one of the Deletes dominates one or more of the Puts. But if we make that change, we have to make sure that any other op applied to the row(s) is not assigned a time in the past relative to those synthetic timestamps. Depending on the resolution of the system clock, "now" when the RowMutations is processed and "now" when the next RPC is serviced after the row locks are released could be the same, and both may address the same row(s), leading to weird time-travel behavior.
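
        A hypothetical sketch of those proposed semantics (illustrative only, not actual HRegion code; rm stands for the incoming RowMutations):

            long now = EnvironmentEdgeManager.currentTimeMillis();
            for (Mutation op : rm.getMutations()) {
              byte[] ts = Bytes.toBytes(now);
              for (List<KeyValue> kvs : op.getFamilyMap().values()) {
                for (KeyValue kv : kvs) {
                  // updateLatestStamp() only touches KVs still carrying LATEST_TIMESTAMP
                  kv.updateLatestStamp(ts);
                }
              }
              now++; // each later op in the bundle dominates the earlier ones
            }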

        Ted Yu added a comment -

        Here is a draft patch including some of the tests Vinod provided.

        I didn't keep testPutRowThenDeleteSameRow from Vinod's tests because I think the net effect of Puts followed by a Delete would be the Delete. This is something the client can manage.

        TestRowMutations and TestAtomicOperation passed.

        Ted Yu added a comment -

        Andy made some interesting observations.

        What can be done in the next patch is to make HRegion.doProcessRowWithTimeout() return a map of byte[] to long, representing the timestamp (i.e. "now") assigned to the Puts per family. That way, the next call to HRegion.doProcessRowWithTimeout() can avoid the time-travel issue.

        Andrew Purtell added a comment -

        No.

        I was thinking of something simple and data-structure-free, like spinning for the system clock to sufficiently advance before releasing the lock(s), but I am curious whether one of the wizards here has a more imaginative idea.
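
        A sketch of that clock-spin idea (hypothetical; maxAssignedTs stands for the largest timestamp handed out while the row lock was held):

            // Busy-wait until the wall clock passes every timestamp assigned under
            // the row lock, so the next RPC cannot be stamped "in the past".
            while (EnvironmentEdgeManager.currentTimeMillis() <= maxAssignedTs) {
              // spin; with millisecond resolution this lasts at most a few ms
            }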

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12584879/8626-v1.txt
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 3 new or modified tests.

        +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

        +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

        +1 javadoc. The javadoc tool did not generate any warning messages.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 lineLengths. The patch introduces lines longer than 100

        +1 site. The mvn site goal succeeds with this patch.

        +1 core tests. The patch passed unit tests in .

        Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//testReport/
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
        Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5834//console

        This message is automatically generated.

        Jean-Marc Spaggiari added a comment -

        Are we going to break compatibility with the previous version?

        I mean, if someone is using this today in their code, knowing how it works, that means after this patch it will not work anymore, right?

        I'm not really convinced this is a bug. Why should we do the deletes first and not the puts first...

        Andrew Purtell added a comment -

        I for one don't think this is a bug and have touched on some of the complexities involved if we think the requested semantics are a good idea. I'm -0.

        Lars Hofhansl added a comment -

        -1 on changing this. This is working as designed.

        It does not make any logical sense to do an atomic Put and Delete to the same column. What does that even mean? You want both the Put and the Delete, but not really the Put, because the Delete should win.

        Will close as "Won't fix" unless I hear objections.

        Vinod added a comment -

        I think this becomes relevant when I want to atomically remove all columns in a column family and then add some new columns to the same column family.

        Here is my original use-case which led to this; please suggest how else I can achieve the same.

        I have an HBase (v0.94.7) table with a single column family, and columns are added to it over time. These columns are named after the timestamp at which they were created, so unless I query the row I do not know which columns it has.

        Now given a row, I want to atomically remove all the existing columns of this column family and add a new set of columns and values.

        So I thought of using HBase's RowMutations like:

        --------------
        RowMutations mutations = new RowMutations(row);

        //delete the column family
        Delete delete = new Delete(row);
        delete.deleteFamily(cf);

        //add new columns
        Put put = new Put(row);
        put.add(cf, col1, v1);
        put.add(cf, col2, v2);

        //delete column family and add new columns to same family
        mutations.add(delete);
        mutations.add(put);

        table.mutateRow(mutations);
        --------------

        But what this code ends up doing is just deleting the column family; it does not add the new columns.

        Liang Xie added a comment -

        Vinod, we (XiaoMi) had a similar scenario to yours; we introduced a new DeleteFamilyVersion KV type and modified Delete/DeleteTracker/ScanDeleteTracker/ScanQueryMatcher (credit: Honghua Feng).

        Vinod added a comment -

        In the use-case above, the client does not know upfront which columns the row/column-family has. I guess this would be a common use-case in schema-free data stores like HBase.

        The client now gets a new copy of the entire row, essentially new data for that row which might not have all the columns the row has currently. So it needs to atomically replace the entire row with this new data.

        So one way I could think of is to use RowMutations to first delete entire column family and then Put the new columns to same column family.

        Another way would be to read the row first to figure out the current columns, and then create a non-overlapping set of Puts and Deletes and add those to the RowMutations. But this is a check-then-act scenario which can cause inconsistency, and it also costs multiple round trips to the server.

        Any other ways to address this use-case?

        Andrew Purtell added a comment -

        I read this as a request to change RowMutations semantics from a bundle of ops applied atomically at the exact same time to a bundle of ops applied atomically, with each op applied at a monotonically increasing time, and with row locks providing mutual exclusion. It's logical enough; a client could then structure the RM with a DeleteColumn first and Puts to the same row+column after, as in the use case described here. I think that could be reasonable, but we should take care that no ops outside the RowMutations bundle can have interleaving timestamps unless the client is providing them, and so is that complication worth it?

        Ted Yu added a comment -

        In Vinod's example, only one column family was involved. However, we should consider multiple column families if the feature is supported.

        Delete delete1 = new Delete(row);
        delete1.deleteFamily(cf1);
        
        //add new columns
        Put put1 = new Put(row);
        put1.add(cf1, col1, v1);
        put1.add(cf1, col2, v2);
        
        Delete delete2 = new Delete(row);
        delete2.deleteFamily(cf2);
        
        //add new columns
        Put put2 = new Put(row);
        put2.add(cf2, col3, v3);
        put2.add(cf2, col4, v4);
        

        In the above case, only two distinct timestamps are needed, t and t+1, where the Deletes carry t and the Puts carry t+1.

        Lars Hofhansl added a comment -

        The semantics of RowMutation are that all edits are applied in one MVCC snapshot and written to a single WALEdit. There are no assumptions about Put/Delete timestamps whatsoever.

        The client is free to set timestamps as desired. In Vinod's example above, the Puts just need a timestamp higher than the Delete's. The client can make that so.

            long now = System.currentTimeMillis();

            Delete delete = new Delete(row);
            delete.deleteFamily(cf1, now);

            Put put1 = new Put(row);
            put1.add(cf1, col1, now + 1, v1); // Put.add(family, qualifier, ts, value)

        Let's not make this more complicated than it has to be. I maintain my -1 on changing this.

        Ted Yu added a comment -

        What if another client uses the following code, where the value of now is the same as the value of now obtained above?

            Delete delete = new Delete(row);
            delete.deleteFamily(cf1, now);

            Put put2 = new Put(row);
            put2.add(cf1, col2, now + 1, v2); // same signature: (family, qualifier, ts, value)

        Both put1 and put2 would go through, right?

        Ted Yu added a comment -

        Maybe we can introduce a special constant, e.g. Long.MIN_VALUE+1, which the user can use for the Deletes in the mutations.
        HRegion would obtain now = System.currentTimeMillis() first.
        When HRegion#doMiniBatchMutation() sees this special constant, it would wait until System.currentTimeMillis() reaches now+1 (let's represent now+1 as the variable t). The Deletes would be given timestamp t while the Puts would receive timestamp now.

        Andrew Purtell added a comment - edited

        I don't follow, Ted. The more I read, the less it makes sense to me. I think we would want a widely applicable change, not something specific to one use case and one kind of op ordering. Why are deletes special? It's not clear why a magic constant is needed. There's already a -1 here. Let's resolve as invalid. I'll add a -1 too; IMO the discussion isn't productive now.

        Lars Hofhansl added a comment -

        Thanks Andy. I concur.

        Vinod, does the approach above (where the client sets the TS) solve your problem?

        Vinod added a comment -

        Lars, the approach you suggest does not work for my use-case because of the race-condition mentioned by Ted in his comment above at:
        https://issues.apache.org/jira/browse/HBASE-8626?focusedCommentId=13669464&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13669464

        Can we extend current RowMutations to support something like what Andrew stated in his previous comment here:
        https://issues.apache.org/jira/browse/HBASE-8626?focusedCommentId=13669152&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13669152

        That would solve my use-case perfectly.

        Lars Hofhansl added a comment -

        You can always have races when multiple clients issue requests at the same time.

        Two clients could issue a request within the same ms (or, in the case of the patch, within 1 ms of each other) and the result would still not be what you want; basically, there is no correct order if things happen "at the same time".

        The race Ted outlined is no worse than the one you get with the patch (well, the chance might be slightly reduced, since writes to the same row are serialized, which lessens the chance that the region server does them in the same ms, but it can still happen).


          People

          • Assignee: Ted Yu
          • Reporter: Vinod
          • Votes: 0
          • Watchers: 11
