Description
I wrote code that extracts the main content of an HTML page (usually weblogs).
This content usually appears in a <body><p> tag, but there is no guarantee; there may also be multiple tags of the form <body><p>, of which only one holds the main content. The code first finds the body node, then computes a weight for each of its child nodes based on text volume and height, so that it finds the lowest node that has the maximum text volume.
I hope that improving this code will lead to solutions for detecting fake or duplicated pages.
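The weighting idea above can be sketched as follows. This is a minimal illustration, not the attached implementation: the Node class, the 90% descent threshold, and the class names are assumptions for the example; real code would walk a parsed DOM tree.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical node type standing in for a parsed DOM element. */
class Node {
    String tag;
    String ownText; // text directly inside this node
    List<Node> children = new ArrayList<>();

    Node(String tag, String ownText) {
        this.tag = tag;
        this.ownText = ownText;
    }

    // Total text volume of this subtree (own text plus all descendants).
    int textVolume() {
        int total = ownText.length();
        for (Node c : children) total += c.textVolume();
        return total;
    }
}

public class MainContentFinder {
    // Starting from the body node, keep descending into a child as long
    // as that single child carries the bulk of the subtree's text
    // (an assumed 90% threshold). The loop stops at the lowest node
    // that still holds the maximum text volume.
    static Node findMainContent(Node body) {
        Node current = body;
        boolean descended = true;
        while (descended) {
            descended = false;
            int total = current.textVolume();
            for (Node child : current.children) {
                if (total > 0 && child.textVolume() >= 0.9 * total) {
                    current = child;
                    descended = true;
                    break;
                }
            }
        }
        return current;
    }

    public static void main(String[] args) {
        // Tiny page: a short nav div and a content div wrapping one <p>.
        Node body = new Node("body", "");
        Node nav = new Node("div", "home");
        Node main = new Node("div", "");
        Node post = new Node("p",
            "This is the main weblog post carrying nearly all of the page text.");
        main.children.add(post);
        body.children.add(nav);
        body.children.add(main);

        System.out.println(findMainContent(body).tag); // prints: p
    }
}
```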
Attachments
Issue Links
- is related to NUTCH-961 Expose Tika's boilerpipe support (Closed)