Details
- Type: Improvement
- Status: Resolved
- Priority: Low
- Resolution: Fixed
- Change Category: Performance
- Complexity: Low Hanging Fruit
- Platform: All
Description
Every normal write request goes through sendToHintedEndpoints. However, the backPressureHosts list built inside this method is never actually used for anything anymore.
Backpressure was introduced by the following commit:
Support optional backpressure strategies at the coordinator
patch by Sergio Bossa; reviewed by Stefania Alborghetti for CASSANDRA-9318
d43b9ce5 Sergio Bossa <sergio.bossa@gmail.com> on 2016/9/19 at 10:42 AM
public static void sendToHintedEndpoints(final Mutation mutation,
                                         Iterable<InetAddress> targets,
                                         AbstractWriteResponseHandler<IMutation> responseHandler,
                                         String localDataCenter,
                                         Stage stage)
throws OverloadedException
{
    int targetsSize = Iterables.size(targets);

    // this dc replicas:
    Collection<InetAddress> localDc = null;
    // extra-datacenter replicas, grouped by dc
    Map<String, Collection<InetAddress>> dcGroups = null;
    // only need to create a Message for non-local writes
    MessageOut<Mutation> message = null;

    boolean insertLocal = false;
    ArrayList<InetAddress> endpointsToHint = null;
    List<InetAddress> backPressureHosts = null;

    for (InetAddress destination : targets)
    {
        checkHintOverload(destination);

        if (FailureDetector.instance.isAlive(destination))
        {
            if (canDoLocalRequest(destination))
            {
                insertLocal = true;
            }
            else
            {
                // belongs on a different server
                if (message == null)
                    message = mutation.createMessage();

                String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(destination);

                // direct writes to local DC or old Cassandra versions
                // (1.1 knows how to forward old-style String message IDs; updated to int in 2.0)
                if (localDataCenter.equals(dc))
                {
                    if (localDc == null)
                        localDc = new ArrayList<>(targetsSize);
                    localDc.add(destination);
                }
                else
                {
                    Collection<InetAddress> messages = (dcGroups != null) ? dcGroups.get(dc) : null;
                    if (messages == null)
                    {
                        messages = new ArrayList<>(3); // most DCs will have <= 3 replicas
                        if (dcGroups == null)
                            dcGroups = new HashMap<>();
                        dcGroups.put(dc, messages);
                    }
                    messages.add(destination);
                }

                if (backPressureHosts == null)
                    backPressureHosts = new ArrayList<>(targetsSize);
                backPressureHosts.add(destination);
            }
        }
        else
        {
            if (shouldHint(destination))
            {
                if (endpointsToHint == null)
                    endpointsToHint = new ArrayList<>(targetsSize);
                endpointsToHint.add(destination);
            }
        }
    }

    if (backPressureHosts != null)
        MessagingService.instance().applyBackPressure(backPressureHosts, responseHandler.currentTimeout());

    if (endpointsToHint != null)
        submitHint(mutation, endpointsToHint, responseHandler);

    if (insertLocal)
        performLocally(stage, Optional.of(mutation), mutation::apply, responseHandler);

    if (localDc != null)
    {
        for (InetAddress destination : localDc)
            MessagingService.instance().sendRR(message, destination, responseHandler, true);
    }

    if (dcGroups != null)
    {
        // for each datacenter, send the message to one node to relay the write to other replicas
        for (Collection<InetAddress> dcTargets : dcGroups.values())
            sendMessagesToNonlocalDC(message, dcTargets, responseHandler);
    }
}
The backpressure-related code has since been removed from the codebase, but the backPressureHosts collection was apparently left behind. Removing it would save every write request from allocating a list and appending an entry per remote replica, reducing the memory footprint of the write path.
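To illustrate why the removal is behavior-preserving, here is a minimal standalone sketch (not the Cassandra code; the class and method names are made up for this ticket). The first variant lazily allocates and populates a list that no one ever reads, mirroring backPressureHosts once applyBackPressure is gone; the second drops it. Both return the same result:

```java
import java.util.ArrayList;
import java.util.List;

public class DeadListDemo {
    // Hypothetical stand-in for sendToHintedEndpoints: populates a
    // collection that is never consumed afterwards, like backPressureHosts
    // after the backpressure code was removed.
    static int routeWithDeadList(List<String> targets) {
        List<String> backPressureHosts = null; // dead collection
        int routed = 0;
        for (String t : targets) {
            if (backPressureHosts == null)
                backPressureHosts = new ArrayList<>(targets.size());
            backPressureHosts.add(t); // allocation + copy with no consumer
            routed++;
        }
        return routed;
    }

    // Same routing logic with the dead collection deleted.
    static int routeWithoutDeadList(List<String> targets) {
        int routed = 0;
        for (String t : targets)
            routed++;
        return routed;
    }

    public static void main(String[] args) {
        List<String> targets = List.of("replica1", "replica2", "replica3");
        System.out.println(routeWithDeadList(targets));    // 3
        System.out.println(routeWithoutDeadList(targets)); // 3
    }
}
```

Since the list is write-only, deleting it cannot change what the method returns or sends; it only removes a per-request ArrayList allocation and one add() per live remote replica.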