Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.15
- Fix Version/s: None
- Component/s: None
Description
As noticed in the latest Common Crawl work, some URL-hosted HTML files are being detected as text/plain and then specialised out to the programming-language type implied by their URL extension.

This is caused by broken XML in the HTML, combined with us having dropped the magic priority of HTML to 40 (below that of XML) to avoid it matching for HTML-containing types such as emails. Because these files have broken XML (e.g. an empty encoding in the XML declaration), the XML root extractor doesn't run, so they get downmixed to text/plain and then specialised by filename.
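To make the failure path concrete, here is a minimal, hypothetical sketch of the detection cascade described above (it is not the real detector code; the `detect` function and the `EXT_TYPES` table are invented for illustration): content that looks like XML but fails to parse skips root-element detection entirely and falls through to the extension-based specialisation.

```python
import xml.etree.ElementTree as ET

# Hypothetical extension -> MIME table used by the final specialisation step.
EXT_TYPES = {".py": "text/x-python", ".rb": "text/x-ruby"}

def detect(content: str, filename: str) -> str:
    """Illustrative sketch of the cascade: XML root extraction first,
    then a text/plain downmix specialised by filename extension."""
    if content.lstrip().startswith("<?xml"):
        try:
            root = ET.fromstring(content)
            # Root-element based detection runs only on well-formed XML.
            if root.tag.lower() == "html":
                return "text/html"
            return "application/xml"
        except ET.ParseError:
            # Broken XML (e.g. an empty encoding in the declaration):
            # the root extractor never runs, so we fall through.
            pass
    # Downmixed to text/plain, then specialised by the URL/file extension.
    for ext, mime in EXT_TYPES.items():
        if filename.endswith(ext):
            return mime
    return "text/plain"
```

An HTML page served from a URL ending in `.py` with a broken XML declaration would therefore come out as `text/x-python` rather than `text/html`, which is the misdetection this bug describes.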