Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.0.0
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      In many situations it is necessary to store more data associated with pages than is currently possible with the segment format. Quite often this is binary data. There are two common workarounds: one is to use per-page metadata, in either Content or ParseData; the other is to use an external, independent database with page ID-s as foreign keys.

      Currently segments can consist of the following predefined parts: content, crawl_fetch, crawl_generate, crawl_parse, parse_text and parse_data. I propose a third option, which is a natural extension of the existing segment format: introduce the ability to add arbitrarily named segment "parts", with the only requirement that they be MapFile-s storing Writable keys and values. Alternatively, we could define a SegmentPart.Writer/Reader to accommodate even more sophisticated scenarios.
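      As a rough illustration of the proposal, here is a minimal sketch using an in-memory TreeMap as a stand-in for a sorted, Writable-keyed MapFile; the class name and the custom part name are hypothetical, not from the patch:

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch: a segment holding arbitrarily named parts, each a sorted
// key/value map (standing in for a MapFile of Writable keys and values).
// Predefined parts like "parse_text" coexist with custom ones.
class SegmentSketch {
    // part name -> (page key -> opaque value bytes)
    private final Map<String, TreeMap<String, byte[]>> parts = new TreeMap<>();

    // Create or fetch a named part, e.g. "parse_data" or a custom "html_preview".
    TreeMap<String, byte[]> getPart(String name) {
        return parts.computeIfAbsent(name, k -> new TreeMap<>());
    }

    public static void main(String[] args) {
        SegmentSketch seg = new SegmentSketch();
        String url = "http://example.com/a.pdf";
        seg.getPart("parse_text").put(url, "plain text".getBytes());
        seg.getPart("html_preview").put(url, "<html>...</html>".getBytes());
        System.out.println(new String(seg.getPart("html_preview").get(url)));
    }
}
```

      The point of the sketch is only that a part is selected by name, so new parts can be added without changing the segment container itself.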

      The existing segment API and searcher API (NutchBean, DistributedSearch Client/Server) should be extended to handle such arbitrary parts.

      Example applications:

      • storing HTML previews of non-HTML pages, such as PDF, PS and Office documents
      • storing pre-tokenized version of plain text for faster snippet generation
      • storing linguistically tagged text for sophisticated data mining
      • storing image thumbnails

      etc, etc ...

      I'm going to prepare a patchset shortly. Any comments and suggestions are welcome.

      1. segmentparts.patch
        29 kB
        Andrzej Bialecki
      2. ParseFilters.java
        3 kB
        Andrzej Bialecki

        Activity

        Enis Soztutar added a comment -

        This patch will indeed resolve many issues related to storing extra information about the crawl. IMO MapFiles will do the job.
        The searcher API can be extended with an interface with a method like

        <E extends Writable> E getInfo(Class<E> key);

        The implementing class should keep a map of Class to MapFiles.

        Andrzej Bialecki added a comment -

        Minor nit: MapFile requires that the key be a WritableComparable.

        I'm not sure I understand the last part of your comment... There may be many parts that use the same key/value classes in MapFiles. I think the API should select the part by name (String) or some other ID, with a map of byte ID-s to directory names (this is to avoid excessive overhead during RPC). Regarding the implementing classes, I think we should use the plugin model, with a registry of the segment parts that are active for the current configuration.
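        A hedged sketch of the byte-ID registry described above (the class and method names are hypothetical): each active part's directory name is assigned a one-byte ID, so an RPC request can select a part without carrying a full class or directory name over the wire.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry mapping compact byte IDs to segment-part directory
// names, so RPC calls can select a part with a single byte.
class PartRegistry {
    private final Map<Byte, String> idToDir = new HashMap<>();
    private final Map<String, Byte> dirToId = new HashMap<>();
    private byte nextId = 0;

    // Register a part directory name and return its compact ID
    // (idempotent for already-registered names).
    synchronized byte register(String dirName) {
        Byte existing = dirToId.get(dirName);
        if (existing != null) return existing;
        byte id = nextId++;
        idToDir.put(id, dirName);
        dirToId.put(dirName, id);
        return id;
    }

    // Resolve the ID received in an RPC call back to a directory name.
    String resolve(byte id) {
        return idToDir.get(id);
    }
}
```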

        Enis Soztutar added a comment -

        >> There may be many parts that use the same key/value classes in MapFiles.

        Yes, indeed, you are right. I hadn't thought about several parts having the same classes.

        >> I think the API should select the part by name (String) or some other ID, with a map of byte ID-s to directory names

        I thought that the map will be from class names to directory names.

        >>I think we should use the plugin model, with a registry of segment parts that are active for the current configuration

        Do you think we should also move HitDetailer, HitSummarizer, HitContent and Searcher to this plugin system? And should we split up the multiple functionality in NutchBean and DistributedSearch$Client, allowing for separate index and segment servers?

        Andrzej Bialecki added a comment -

        > I thought that the map will be from class names to directory names.

        Well, then you would have to pass the whole class name in an RPC call - I think we should come up with a way that uses at most one byte to select the right part.

        > Do you think that we sould also move HitDetailer, HitSummarizer, HitContent and Searcher to this plugin system

        Yes, that was my plan - the same way we did it with indexing plugins - although I intend to create a separate issue regarding the use of separate index / page / summary servers, to avoid complicating this patch too much.

        Andrzej Bialecki added a comment -

        This patch contains the following modifications and additions:

        • a new extension point, ParseFilter, to post-process results of parsing just before they are written out.
        • related chained filter facade, ParseFilters.
        • Parse / ParseImpl changes to support passing of arbitrary data records "tagged" by Text keys.
        • ParseOutputFormat changes to store such arbitrary data in MapFile-s named after the Text keys.
        • NutchBean and DistributedSearch protocol changes to support retrieving records from named segment parts.
        • PdfParser changes to support storing of HTML preview for PDF files.

        Please review and comment.
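        To make the ParseFilter / ParseFilters part of the patch concrete, here is a hedged sketch of the idea; the interfaces and signatures below are illustrative, not the actual ones from the patch. Filters post-process parse output just before it is written, optionally attaching extra records tagged by String keys (standing in for the Text-keyed segment parts).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative extension point: a filter may inspect/modify the parse text
// and attach extra records under String tags.
interface ParseFilter {
    void filter(String url, StringBuilder parseText, Map<String, byte[]> taggedRecords);
}

// Illustrative chained-filter facade: applies each filter in order and
// collects the tagged records they produce.
class ParseFilters {
    private final ParseFilter[] filters;

    ParseFilters(ParseFilter... filters) { this.filters = filters; }

    Map<String, byte[]> run(String url, StringBuilder parseText) {
        Map<String, byte[]> records = new HashMap<>();
        for (ParseFilter f : filters) {
            f.filter(url, parseText, records);
        }
        return records;
    }
}
```

        Each tagged record would then be written to the MapFile named after its tag, as described in the patch summary above.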

        Doğacan Güney added a comment -

        I skimmed through it and it looks awesome. I will try to test it better later, but it seems the patch is missing the ParseFilters class.

        Andrzej Bialecki added a comment -

        Add missing file.

        Doğacan Güney added a comment - edited

        I haven't tested it yet, but the code looks solid. I have a couple of comments, though:

        • One can't define the order of execution for ParseFilter-s. It seems we always need ordering in filters one way or another, so it may be good to just add it and be done with it.
        • The ParseFilters.filter method throws IOException. I think it would be better if it threw a ParseFilterException or similar, in keeping with IndexingFilters -> IndexingException and ScoringFilters -> ScoringFilterException.
        • There are a few places that iterate over Map.keySet() and then fetch each value with Map.get(key). FindBugs suggests that it is better to iterate over Map.entrySet() in these cases.
        • When someone requests more than one part's data, we start a couple of threads, receive the data and join the threads. Nutch also does this for summaries. Is starting and joining threads again and again a problem? Especially when clustering, you may end up starting and joining 100 threads for each query. Perhaps a thread pool? This is not completely related to this patch; it is just something that bugs me.
        • I just realized that there is no ParseFilter class either.
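        The Map.entrySet() point in the comment above can be shown with a small self-contained example (the class and method names here are made up for illustration): iterating entrySet() traverses the map once, while keySet() followed by get() performs an extra lookup per key.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates the FindBugs suggestion: prefer entrySet() iteration over
// keySet() + get(), which does a redundant lookup for every key.
class EntrySetDemo {
    static int totalViaEntrySet(Map<String, Integer> m) {
        int total = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            total += e.getValue();   // one traversal, no extra lookups
        }
        return total;
    }

    static int totalViaKeySet(Map<String, Integer> m) {
        int total = 0;
        for (String k : m.keySet()) {
            total += m.get(k);       // an extra get() per key
        }
        return total;
    }
}
```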
        Markus Jelsma added a comment -

        Bulk close of legacy issues: http://www.lucidimagination.com/search/document/2738eeb014805854/clean_up_open_legacy_issues_in_jira

          People

          • Assignee:
            Andrzej Bialecki
            Reporter:
            Andrzej Bialecki
          • Votes:
            2
            Watchers:
            2

            Dates

            • Created:
              Updated:
              Resolved:

              Development