Details

    • Type: New Feature
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 0.89.20100924
    • Fix Version/s: None
    • Component/s: util
    • Labels: None

      Description

      This has happened before, and I am not sure how the new Master improves on it (this knowledge only lives between the lines or buried in some people's heads - one other thing I wish for is a better place to communicate what each code path improves). Just so we do not miss it: there is an issue where disabling a large table sometimes simply times out and the table gets stuck in limbo.

      From the CDH User list:

      On Fri, Jan 7, 2011 at 1:57 PM, Sean Sechrist <ssechrist@gmail.com> wrote:
      To get them out of META, you can just scan '.META.' for that table name, and delete those rows. We had to do that a few months ago.

      -Sean

      That did it. For the benefit of others, here's the code. Beware the hardcoded table names; run at your own peril.

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Delete;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.MetaScanner;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.util.Bytes;

      public class CleanFromMeta {
          public static class Cleaner implements MetaScanner.MetaScannerVisitor {
              private final HTable meta;

              public Cleaner(Configuration conf) throws IOException {
                  meta = new HTable(conf, ".META.");
              }

              public boolean processRow(Result rowResult) throws IOException {
                  // Match on "webtable," including the trailing comma so that
                  // similarly named tables (e.g. "webtable2") are not touched.
                  String r = Bytes.toString(rowResult.getRow());
                  if (r.startsWith("webtable,")) {
                      System.out.println("Deleting row " + r);
                      meta.delete(new Delete(rowResult.getRow()));
                  }
                  return true;  // keep scanning
              }
          }

          public static void main(String[] args) throws Exception {
              Configuration conf = HBaseConfiguration.create();
              MetaScanner.metaScan(conf, new Cleaner(conf),
                   Bytes.toBytes("webtable"));
          }
      }
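
One subtlety worth calling out: .META. row keys have the form <tablename>,<startkey>,<timestamp>..., which is why the code matches on the prefix "webtable," with the trailing comma included. A stdlib-only sketch of that check (the class name and sample keys are illustrative, not from HBase):

```java
public class MetaRowMatch {
    // .META. row keys look like "tablename,startkey,timestamp...".
    // Appending a comma to the table name before matching keeps a table
    // named "webtable" from also matching rows of a table "webtable2".
    static boolean belongsTo(String rowKey, String tableName) {
        return rowKey.startsWith(tableName + ",");
    }

    public static void main(String[] args) {
        System.out.println(belongsTo("webtable,,1294426573508", "webtable"));   // prints true
        System.out.println(belongsTo("webtable2,,1294426573508", "webtable"));  // prints false
    }
}
```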
      

      I suggest moving this into HBaseFsck. Personally, I do not like having these JRuby scripts floating around that may or may not help. This functionality should be available when a user gets stuck and knows what they are doing (they can delete from .META. anyway). Maybe a "--disable-table <tablename> --force" option or so? But since disable is already in the shell, we could add a "--force" there instead? Or add a "--delete-table <tablename>" to hbck?

        Activity

        Lars George created issue -
        Lars George made changes - Description
        Lars George made changes - Description
        Lars George made changes - Description
        Lars George made changes - Description
        stack made changes -
        Fix Version/s 0.92.0 [ 12314223 ]

          People

          • Assignee: Unassigned
          • Reporter: Lars George
          • Votes: 0
          • Watchers: 6

          Dates

          • Created:
          • Updated:

          Development