I've only looked at a handful of files that contain XFA, so this metadata is entirely new to me. The files I've looked at come from govdocs1 and are fairly old by now.
In the attached 041617_filled_out.pdf, I've added content to the forms and saved the document.
With the patch, I'm getting all of the boilerplate from the XFA extraction, but none of the content entered into the form, because that content isn't in <(speak|text|exData)> elements. With our old code, however, I do see the entered data, e.g. my_exhibitor.
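To illustrate why the narrow regex misses the entered data: in XFA, filled-in values typically live in the xfa:datasets packet, not in the template's <speak>/<text>/<exData> elements. Here's a minimal Python sketch (not the actual Tika code, which is Java) against a tiny synthetic datasets fragment; the field names (ExhibitorName, BoothNumber) are invented for illustration, only the xfa:datasets/xfa:data structure is standard:

```python
import xml.etree.ElementTree as ET

# Synthetic XFA datasets packet mimicking where filled-in form values
# usually live. Field element names are made up for this example.
XFA_DATASETS = """\
<xfa:datasets xmlns:xfa="http://www.xfa.org/schema/xfa-data/1.0/">
  <xfa:data>
    <form1>
      <ExhibitorName>my_exhibitor</ExhibitorName>
      <BoothNumber>42</BoothNumber>
    </form1>
  </xfa:data>
</xfa:datasets>
"""

def extract_field_values(xml_text):
    """Return {local-name: text} for every leaf element with text content."""
    root = ET.fromstring(xml_text)
    values = {}
    for el in root.iter():
        # leaf element (no children) with non-whitespace text
        if len(el) == 0 and el.text and el.text.strip():
            local = el.tag.rsplit('}', 1)[-1]  # strip any namespace prefix
            values[local] = el.text.strip()
    return values

print(extract_field_values(XFA_DATASETS))
# {'ExhibitorName': 'my_exhibitor', 'BoothNumber': '42'}
```

None of these elements match <(speak|text|exData)>, which would explain why the patch's regexes return only template boilerplate.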
Is this PDF storing the contents of the form in both the xfa and in the traditional AcroForm?
I imagine that won't happen in all PDFs, and that some will store the data in only one or the other?
To avoid duplication of content, do we want to skip processing of AcroForm data if XFA exists? Will we miss anything?
The other major question: I like the narrow focus that the current regexes yield, but why wouldn't we want to run our HtmlParser or our DcXMLParser against the bytes and pull everything out? We'd have to skip inline/embedded images, or handle them properly at some point... but are there any other reasons not to?
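To make the trade-off concrete, here's a rough Python sketch (again illustrative only, not Tika's implementation) contrasting the two approaches on a synthetic XDP document: a narrow regex pulls only the template's <text> boilerplate, while a real XML parser walking every text node also surfaces the entered value in the datasets packet. The sample field name is invented:

```python
import re
import xml.etree.ElementTree as ET

# Synthetic XDP wrapper with a template (boilerplate) and a datasets
# packet (entered data). Field name ExhibitorName is made up.
SAMPLE_XFA = """\
<xdp:xdp xmlns:xdp="http://ns.adobe.com/xdp/">
  <template xmlns="http://www.xfa.org/schema/xfa-template/3.3/">
    <text>Exhibitor name:</text>
  </template>
  <xfa:datasets xmlns:xfa="http://www.xfa.org/schema/xfa-data/1.0/">
    <xfa:data><form1><ExhibitorName>my_exhibitor</ExhibitorName></form1></xfa:data>
  </xfa:datasets>
</xdp:xdp>
"""

def narrow_regex_extract(xml_text):
    # Roughly what a <(speak|text|exData)>-only approach sees.
    return re.findall(r'<(?:speak|text|exData)[^>]*>([^<]+)<', xml_text)

def parse_everything(xml_text):
    # Run a real XML parser and collect every non-whitespace text node.
    return [t.strip() for t in ET.fromstring(xml_text).itertext() if t.strip()]

print(narrow_regex_extract(SAMPLE_XFA))  # ['Exhibitor name:']
print(parse_everything(SAMPLE_XFA))      # ['Exhibitor name:', 'my_exhibitor']
```

The full-parse route grabs everything in document order, at the cost of also emitting any markup-adjacent noise we'd then need to filter (images, scripts, etc.).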
Tilman Hausherr, have you worked with XFA? Any recommendations for pulling as much info as we can without duplication?
We could make this configurable, of course.