ManifoldCF / CONNECTORS-1364

Better bin naming in the Shared Drive Connector



    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: ManifoldCF 1.9, ManifoldCF 2.7
    • Component/s: JCIFS connector
    • Labels: None


      Hello and happy new year!

      Bin naming in the Shared Drive Connector makes assumptions that are not always valid.

      As I understand it, Manifold uses bins to prevent overloading data sources. In the SDC, server name is designated as bin name. All jobs created against a particular server will be treated as one unit when documents are prioritised, which can severely disadvantage some jobs (e.g. late starters).
      Moreover, this is incompatible with some common enterprise server topologies. In Windows DFS, which is widely used in large enterprises, what the SDC treats as a server name isn't actually a physical resource: it's a namespace that can span many servers and shares. In that case it doesn't make sense to throttle simply on the root 'server' name. In other environments, a powerful storage server may be more than capable of handling a high crawl load, and overzealous throttling can end up limiting Manifold's performance there.

      I'm struggling to find a single solution that fits all cases, so I'm leaning towards passing some sort of server-topology or throttling-depth flag into the repository connection configuration, as a hint that ShareDriveConnector#getBinNames can use to decide whether the bin name should be server, server+share, or server+share+root_folder. Share and root_folder would need to be explicitly passed in the repository configuration too, or extracted from the documentIdentifier argument in getBinNames (assuming it's reliable).
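      To make the idea concrete, here is a minimal sketch of how a configurable throttling depth could drive bin naming. This is not the actual ManifoldCF implementation: the throttlingDepth parameter is the hypothetical config hint described above, and the sketch assumes the documentIdentifier is an smb:// style URL of the form smb://server/share/folder/file.

      ```java
      public class BinNameSketch {

        // Hypothetical: derive a bin name from a documentIdentifier like
        // "smb://server/share/folder/file.txt", keeping the first
        // throttlingDepth path segments (1 = server, 2 = server+share,
        // 3 = server+share+root_folder).
        static String[] getBinNames(String documentIdentifier, int throttlingDepth) {
          // Strip the smb:// scheme prefix if present.
          String path = documentIdentifier.startsWith("smb://")
              ? documentIdentifier.substring("smb://".length())
              : documentIdentifier;
          String[] segments = path.split("/");
          // Never include the final segment (the document itself), and
          // always keep at least the server name.
          int depth = Math.min(throttlingDepth, Math.max(segments.length - 1, 1));
          StringBuilder bin = new StringBuilder();
          for (int i = 0; i < depth && i < segments.length; i++) {
            if (i > 0) bin.append('/');
            bin.append(segments[i]);
          }
          return new String[]{bin.toString()};
        }

        public static void main(String[] args) {
          // With depth 2, documents on the same share throttle together,
          // but different shares on one DFS root do not.
          System.out.println(getBinNames("smb://server/share/folder/file.txt", 2)[0]);
        }
      }
      ```

      With depth 1 this reproduces the current server-only behaviour; deeper values split the load per share or per root folder, which is the DFS-friendly option discussed above.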



        Attachments:
        1. CONNECTORS-1364.git.patch (18 kB, Aeham Abushwashi)
        2. CONNECTORS-1364.git.v2.patch (13 kB, Aeham Abushwashi)



            Karl Wright (kwright@metacarta.com)
            Aeham Abushwashi (aeham.abushwashi)
            Votes: 1
            Watchers: 3