Currently, the Spark Thrift Server accumulates data in its appcache directories even for queries that finished long ago. This fills the disk (over 100 GB per worker node) within days, and the only way to reclaim the space is to restart the Thrift Server application. Deleting the files by hand isn't a workaround either, because Spark subsequently fails with FileNotFound errors.
I asked about this on Stack Overflow a few weeks ago, but it does not appear to be achievable through configuration alone.
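For reference, these are the cleanup-related settings I'm aware of from the Spark docs; as far as I can tell, the standalone worker cleanup only purges directories of *finished* applications, so it never fires for a perpetually running Thrift Server:

```
# Worker-side cleanup (set via SPARK_WORKER_OPTS on each standalone worker).
# Only removes directories of STOPPED applications:
spark.worker.cleanup.enabled     true
spark.worker.cleanup.interval    1800      # check every 30 minutes
spark.worker.cleanup.appDataTtl  604800    # remove app dirs older than 7 days

# Driver-side: periodic GC so the ContextCleaner can release shuffle files
# for RDDs that have gone out of scope (spark-defaults.conf):
spark.cleaner.periodicGC.interval  30min
```

Neither of these seems to address appcache growth within a single long-running application, unless I've misunderstood them.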
Am I missing some configuration option, or some other factor here?
Otherwise, can anyone point me to the code that handles this, so maybe I can try my hand at a fix?