Details
- Type: Bug
- Priority: Major
- Status: Resolved
- Resolution: Fixed
Description
In https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/AclControlManager.java#L143 we loop through the ACL filters and add a RemoveAccessControlEntryRecord to the response list for each matching ACL. There appears to be a bug here: if two filters match the same ACL, we create two RemoveAccessControlEntryRecord records for that one ACL. This is a problem because on replay we throw an exception (https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/controller/AclControlManager.java#L195) if the ACL is not present in the in-memory data structures, which is exactly what happens when the second RemoveAccessControlEntryRecord is replayed.
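The shape of the problem can be sketched with hypothetical stand-ins (plain strings for ACL bindings, predicates for filters; none of these names come from the Kafka code itself): a nested loop that emits one removal record per (filter, matching ACL) pair will emit duplicates whenever two filters match the same ACL.

```java
import java.util.*;
import java.util.function.Predicate;

public class DuplicateRecordSketch {
    public static void main(String[] args) {
        // Hypothetical stand-ins: ACL bindings as strings, filters as predicates.
        List<String> acls = Arrays.asList(
            "User:alice READ topic-a",
            "User:bob WRITE topic-b");
        List<Predicate<String>> filters = Arrays.asList(
            acl -> acl.contains("alice"),  // matches the first ACL
            acl -> acl.contains("READ"));  // also matches the first ACL
        // Mirrors the loop shape described in the issue: one removal record per
        // (filter, matching ACL) pair, with no de-duplication.
        List<String> removalRecords = new ArrayList<>();
        for (Predicate<String> filter : filters) {
            for (String acl : acls) {
                if (filter.test(acl)) {
                    removalRecords.add("RemoveAccessControlEntryRecord(" + acl + ")");
                }
            }
        }
        // Two records were emitted for the same ACL; replaying the second one
        // would fail because that ACL was already removed by the first.
        System.out.println(removalRecords);
    }
}
```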
Maybe we can just de-dupe both List<AclDeleteResult> and List<ApiMessageAndVersion>? Something like the following (showing only the ApiMessageAndVersion case):

private List<ApiMessageAndVersion> deDupeApiMessageAndVersion(List<ApiMessageAndVersion> messages) {
    return new HashSet<>(messages).stream().collect(Collectors.toList());
}

should suffice, since I don't think the ordering matters within the list of response objects.