Currently the bulk delete first clears the cache and then physically deletes the requested types from Janus. At the end of this operation it iterates over the attributes of the deleted types and, for those attributes whose types can have indexes (built-ins, enums and structs), marks the indexes as deleted. Since for each attribute the routine looks the attribute's type up again through the typeRegistry, that is the cache, if the attribute has a custom type that type has already been deleted, and the bulk delete partially fails.
In particular, for each attribute where this error does not occur (for example because its type is built-in), a deletion label is added to the property key, carrying the number of times that property key is present in the store: something like property_key_deleted_0. Indeed, if you reload the same types and delete them again, the property keys are then marked as property_key_deleted_1.
If, however, the lookup of the type in the cache fails for the reason above, the property key is never marked and remains fully active, which, as far as I can tell at my current level of knowledge, causes at least a memory leak.
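To make the ordering problem concrete, here is a minimal, self-contained sketch. It does not use the real typeRegistry or Janus APIs; all names (`attrValueType`, `markDeleted`, `markDeletedFixed`, the type labels) are hypothetical. `markDeleted` mimics the current behavior (clear the cache first, then look types up), so the attribute with a custom type is silently skipped; `markDeletedFixed` resolves the types before clearing the cache, as the patch intends.

```java
import java.util.*;

public class BulkDeleteSketch {
    // Hypothetical mapping: attribute label -> label of its value type.
    // "name" uses a built-in type, "home" a custom (struct) type.
    static final Map<String, String> attrValueType =
            Map.of("name", "string", "home", "address");

    // Current behavior: the cache is cleared BEFORE the types are resolved,
    // so the lookup fails for attributes whose value type was itself deleted.
    static List<String> markDeleted(Set<String> cache, List<String> typesToDelete) {
        List<String> marked = new ArrayList<>();
        cache.removeAll(typesToDelete);              // cache cleared first
        for (String attr : typesToDelete) {
            String valueType = attrValueType.get(attr);
            if (valueType == null) continue;         // not an attribute
            if (cache.contains(valueType)) {
                marked.add(attr + "_deleted_0");     // built-in type still in cache
            }
            // else: custom value type already evicted -> property key never
            // marked, stays fully active (the leak described above)
        }
        return marked;
    }

    // Patched ordering: resolve every attribute's value type while the cache
    // is still intact, then clear it.
    static List<String> markDeletedFixed(Set<String> cache, List<String> typesToDelete) {
        Map<String, String> resolved = new HashMap<>();
        for (String attr : typesToDelete) {
            String valueType = attrValueType.get(attr);
            if (valueType != null) resolved.put(attr, valueType);
        }
        cache.removeAll(typesToDelete);              // now safe to clear
        List<String> marked = new ArrayList<>();
        for (String attr : typesToDelete) {
            if (resolved.containsKey(attr)) marked.add(attr + "_deleted_0");
        }
        return marked;
    }

    public static void main(String[] args) {
        List<String> toDelete = List.of("address", "name", "home");
        System.out.println(markDeleted(
                new HashSet<>(Set.of("string", "address", "name", "home")), toDelete));
        System.out.println(markDeletedFixed(
                new HashSet<>(Set.of("string", "address", "name", "home")), toDelete));
    }
}
```

Running the sketch, the buggy ordering marks only `name_deleted_0` and silently skips `home`, while the fixed ordering marks both property keys.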
If I reload the types with the same entities and attributes everything seems fine, but if I change some attributes the result is, in my opinion, unpredictable.
This patch, tested on version 2.0 and SNAPSHOT 3.0, would resolve this issue.
Attached is a small example that shows this behavior with and without the patch.