CASSANDRA-7615

Data isn't written to the disk with enough space while using multiple data_file_directories


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Normal
    • Resolution: Duplicate
    • Environment:
      Hardware: AWS c3.xlarge (80GB SSD) with 500GB EBS
      OS: Ubuntu
      Cassandra version: 1.2.16
    • Severity: Normal

    Description

      Background: The local disk on the server was running out of disk space, so I attached a new volume and added a new data directory (on the new disk) to the data_file_directories list in cassandra.yaml.

      Behavior: I could see that some of the new sstables were flushed to this new data directory, but Cassandra was still compacting some large sstables onto the old disk (at least the tmp sstable was written to the old disk; I am not sure whether it would have moved the tmp sstable later or not). Eventually, the node crashed with an out-of-disk-space error. This is not the behavior CASSANDRA-4292 describes, unless that design decision has since changed.

      Suspect: It seems the number of current tasks takes precedence over the volume's free space, so directory selection chooses a less-loaded directory with less free space over a more-loaded directory with more free space. In the snippet below (from Directories.java), the sort by task count runs last; since Collections.sort is stable, free space (the first sort, descending) only breaks ties between directories with equal task counts:

        // sort directories by free space, in _descending_ order.
        Collections.sort(candidates);
      
              // sort directories by load, in _ascending_ order.
              Collections.sort(candidates, new Comparator<DataDirectory>()
              {
                  public int compare(DataDirectory a, DataDirectory b)
                  {
                      return a.currentTasks.get() - b.currentTasks.get();
                  }
              });
      

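      For illustration, here is a rough sketch of the inverted precedence (most free space first, task count only as a tie-breaker). This is not the actual Cassandra code or a proposed patch; DataDirectorySketch, pickLocation, and the field names are stand-ins for the real Directories internals:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        // Sketch only: a stand-in for the DataDirectory class in
        // org.apache.cassandra.db.Directories.
        class DataDirectorySketch
        {
            final String location;
            final long estimatedAvailableSpace; // bytes free on the volume
            final int currentTasks;             // flush/compaction tasks in flight

            DataDirectorySketch(String location, long space, int tasks)
            {
                this.location = location;
                this.estimatedAvailableSpace = space;
                this.currentTasks = tasks;
            }

            // Pick a directory for a write of writeSize bytes: drop volumes that
            // cannot hold the write, then prefer the most free space, using the
            // task count only to break ties (the reverse of the 1.2 ordering).
            static DataDirectorySketch pickLocation(List<DataDirectorySketch> dirs, long writeSize)
            {
                List<DataDirectorySketch> candidates = new ArrayList<DataDirectorySketch>();
                for (DataDirectorySketch d : dirs)
                    if (d.estimatedAvailableSpace >= writeSize)
                        candidates.add(d);
                if (candidates.isEmpty())
                    return null; // no volume can hold this write

                Collections.sort(candidates, new Comparator<DataDirectorySketch>()
                {
                    public int compare(DataDirectorySketch a, DataDirectorySketch b)
                    {
                        // descending by free space first ...
                        int bySpace = Long.compare(b.estimatedAvailableSpace, a.estimatedAvailableSpace);
                        if (bySpace != 0)
                            return bySpace;
                        // ... then ascending by load
                        return Integer.compare(a.currentTasks, b.currentTasks);
                    }
                });
                return candidates.get(0);
            }
        }

      With that ordering, the mostly empty 500GB EBS volume would be chosen over the nearly full local SSD even if it already had compactions running.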
      Thanks in advance.


            People

              Assignee: Unassigned
              Reporter: David Chia (davychia)
