SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/services/software/2.0.0-44/spark-2.4.1.2.0.0-33-bin/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/services/software/2.0.0-44/hadoop-2.7.5.2.0.0-5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/08/23 08:39:53 INFO util.SignalUtils: Registered signal handler for TERM
19/08/23 08:39:53 INFO util.SignalUtils: Registered signal handler for HUP
19/08/23 08:39:53 INFO util.SignalUtils: Registered signal handler for INT
19/08/23 08:39:53 WARN util.Utils: Your hostname, localhost resolves to a loopback address: 127.0.0.1; using 10.1.22.3 instead (on interface eth0)
19/08/23 08:39:53 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/08/23 08:39:53 INFO spark.SecurityManager: Changing view acls to: hadoop
19/08/23 08:39:53 INFO spark.SecurityManager: Changing modify acls to: hadoop
19/08/23 08:39:53 INFO spark.SecurityManager: Changing view acls groups to:
19/08/23 08:39:53 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/23 08:39:53 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
19/08/23 08:39:54 INFO yarn.ApplicationMaster: Preparing Local resources
19/08/23 08:39:54 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1566517991256_0004_000001
19/08/23 08:39:54 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
19/08/23 08:39:54 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
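The two `WARN util.Utils` lines above can be silenced by binding Spark to an explicit address instead of letting it fall back from the loopback hostname. A minimal sketch, assuming 10.1.22.3 (the interface Spark auto-selected in this log) is the address you actually want; the snippet would typically live in `$SPARK_HOME/conf/spark-env.sh`, which varies by install:

```shell
# Pin the address Spark binds to, so it stops warning that the hostname
# resolves to a loopback address and picking an interface on its own.
# 10.1.22.3 is taken from the log above; substitute your own address.
export SPARK_LOCAL_IP=10.1.22.3
echo "SPARK_LOCAL_IP=$SPARK_LOCAL_IP"
```

The SLF4J "multiple bindings" warning is separate and harmless here: two copies of slf4j-log4j12 (1.7.16 from Spark, 1.7.10 from Hadoop) are on the classpath, and SLF4J simply picks one; removing either jar from the classpath makes it go away.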
19/08/23 08:39:54 INFO driver.RSCDriver: Connecting to: 10.1.22.11:30000
19/08/23 08:39:54 INFO driver.RSCDriver: Starting RPC server...
19/08/23 08:39:54 INFO rpc.RpcServer: Connected to the port 30000
19/08/23 08:39:54 WARN rsc.RSCConf: Your hostname, localhost, resolves to a loopback address; using 10.1.22.3 instead (on interface eth0)
19/08/23 08:39:54 WARN rsc.RSCConf: Set 'livy.rsc.rpc.server.address' if you need to bind to another address.
19/08/23 08:39:55 INFO driver.RSCDriver: Received job request 0a5342fb-87ae-45f2-b0e7-f26bd262a982
19/08/23 08:39:55 INFO driver.RSCDriver: SparkContext not yet up, queueing job request.
19/08/23 08:39:57 INFO driver.SparkEntries: Starting Spark context...
19/08/23 08:39:57 INFO spark.SparkContext: Running Spark version 2.4.1.2.0.0-33
19/08/23 08:39:57 INFO spark.SparkContext: Submitted application: livy-session-1
19/08/23 08:39:57 INFO spark.SecurityManager: Changing view acls to: hadoop
19/08/23 08:39:57 INFO spark.SecurityManager: Changing modify acls to: hadoop
19/08/23 08:39:57 INFO spark.SecurityManager: Changing view acls groups to:
19/08/23 08:39:57 INFO spark.SecurityManager: Changing modify acls groups to:
19/08/23 08:39:57 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
19/08/23 08:39:57 INFO util.Utils: Successfully started service 'sparkDriver' on port 34260.
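The `WARN rsc.RSCConf` lines name the matching knob on the Livy side. A sketch of the setting, assuming the standard `livy-client.conf` location under Livy's conf directory and reusing the address from the log; this is illustrative, not a confirmed fix for this cluster:

```properties
# livy-client.conf -- bind the RSC RPC server to an explicit address
# instead of the auto-detected 10.1.22.3 chosen in the log above.
livy.rsc.rpc.server.address = 10.1.22.3
```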
19/08/23 08:39:57 INFO spark.SparkEnv: Registering MapOutputTracker
19/08/23 08:39:57 INFO spark.SparkEnv: Registering BlockManagerMaster
19/08/23 08:39:57 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/23 08:39:57 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/23 08:39:57 INFO storage.DiskBlockManager: Created local directory at /mnt-b/hadoop/data/nm/usercache/hadoop/appcache/application_1566517991256_0004/blockmgr-42f738ab-52ef-41b8-9898-779b4e250920
19/08/23 08:39:57 INFO storage.DiskBlockManager: Created local directory at /mnt-c/hadoop/data/nm/usercache/hadoop/appcache/application_1566517991256_0004/blockmgr-37bb0780-d756-4860-a1e6-0096b3cf4bee
19/08/23 08:39:57 INFO memory.MemoryStore: MemoryStore started with capacity 4.1 GB
19/08/23 08:39:57 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/08/23 08:39:57 INFO util.log: Logging initialized @4969ms
19/08/23 08:39:57 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, /jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, /stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, /storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, /executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, /api, /jobs/job/kill, /stages/stage/kill.
19/08/23 08:39:57 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
19/08/23 08:39:57 INFO server.Server: Started @5042ms
19/08/23 08:39:57 INFO server.AbstractConnector: Started ServerConnector@7d29cc18{HTTP/1.1,[http/1.1]}{0.0.0.0:58654}
19/08/23 08:39:57 INFO util.Utils: Successfully started service 'SparkUI' on port 58654.
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61142076{/jobs,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a93eb82{/jobs/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@644184c3{/jobs/job,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5fefa63e{/jobs/job/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6a1599ba{/stages,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29a6fa7{/stages/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@317903b9{/stages/stage,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@42db43e{/stages/stage/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7db00bc9{/stages/pool,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@426593fc{/stages/pool/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4f00053b{/storage,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2665c4a1{/storage/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b767de7{/storage/rdd,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@35e2ff06{/storage/rdd/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a840d5c{/environment,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@724c8e61{/environment/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a3d4c8a{/executors,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@38059a04{/executors/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@62b19eb5{/executors/threadDump,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74182b04{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b17011{/static,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a7a8643{/,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1dc4c66c{/api,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@35b9125c{/jobs/job/kill,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26a4ded3{/stages/stage/kill,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.22.3:58654
19/08/23 08:39:58 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
19/08/23 08:39:58 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1566517991256_0004 and attemptId Some(appattempt_1566517991256_0004_000001)
19/08/23 08:39:58 WARN util.Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
19/08/23 08:39:58 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/23 08:39:58 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50412.
19/08/23 08:39:58 INFO netty.NettyBlockTransferService: Server created on 10.1.22.3:50412
19/08/23 08:39:58 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/23 08:39:58 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.22.3, 50412, None)
19/08/23 08:39:58 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.1.22.3:50412 with 4.1 GB RAM, BlockManagerId(driver, 10.1.22.3, 50412, None)
19/08/23 08:39:58 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.22.3, 50412, None)
19/08/23 08:39:58 INFO storage.BlockManager: external shuffle service port = 7337
19/08/23 08:39:58 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.22.3, 50412, None)
19/08/23 08:39:58 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
19/08/23 08:39:58 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a405b18{/metrics/json,null,AVAILABLE,@Spark}
19/08/23 08:39:58 INFO scheduler.EventLoggingListener: Logging events to hdfs://10.1.22.10:9000/spark-history/application_1566517991256_0004_1.lz4
19/08/23 08:39:58 WARN util.Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
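The repeated `WARN util.Utils` entry means `spark.executor.instances` was set below `spark.dynamicAllocation.minExecutors`, so Spark ignores the fixed instance count and starts from the dynamic-allocation floor instead ("Using initial executors = 1"). A `spark-defaults.conf` sketch of one consistent way to resolve it; the values are illustrative, not taken from this cluster's config:

```properties
# spark-defaults.conf -- with dynamic allocation on, either drop the
# fixed count entirely or keep it at or above the minimum, so neither
# setting is silently ignored.
spark.dynamicAllocation.enabled          true
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.initialExecutors 1
# spark.executor.instances               1   # if set at all, keep >= minExecutors
```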
19/08/23 08:39:58 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/23 08:39:58 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/08/23 08:39:58 INFO client.RMProxy: Connecting to ResourceManager at /10.1.22.10:8030
19/08/23 08:39:58 INFO yarn.YarnRMClient: Registering the ApplicationMaster
19/08/23 08:39:58 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> /opt/sparkling/sparklingExtraLibs/cosn/cos_hadoop_api-5.2.6.jar:/opt/sparkling/sparklingExtraLibs/cosn/hadoop-cos-2.7.5.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/opt/sparkling/spark/jars/*:/opt/sparkling/sparklingExtraLibs/cosn/cos_hadoop_api-5.2.6.jar:/opt/sparkling/sparklingExtraLibs/cosn/hadoop-cos-2.7.5.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
    SPARK_YARN_STAGING_DIR -> hdfs://10.1.22.10:9000/user/hadoop/.sparkStaging/application_1566517991256_0004
    SPARK_USER -> hadoop
    SPARK_HOME -> /opt/sparkling/spark
    PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.10.7-src.zip<CPS>{{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.10.7-src.zip

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx10240m \
      -Djava.io.tmpdir={{PWD}}/tmp \
      '-Dspark.port.maxRetries=16' \
      '-Dspark.network.timeout=600s' \
      '-Dspark.driver.port=34260' \
      '-Dspark.history.ui.port=18080' \
      '-Dspark.rpc.numRetries=10' \
      '-Dspark.shuffle.service.port=7337' \
      '-Dspark.ui.port=0' \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.CoarseGrainedExecutorBackend \
      --driver-url \
      spark://CoarseGrainedScheduler@10.1.22.3:34260 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      2 \
      --app-id \
      application_1566517991256_0004 \
      --user-class-path \
      file:$PWD/__app__.jar \
      --user-class-path \
      file:$PWD/netty-all-4.1.17.Final.jar \
      --user-class-path \
      file:$PWD/livy-api-0.6.0.2.0.0-38-incubating.jar \
      --user-class-path \
      file:$PWD/livy-rsc-0.6.0.2.0.0-38-incubating.jar \
      --user-class-path \
      file:$PWD/livy-repl_2.11-0.6.0.2.0.0-38-incubating.jar \
      --user-class-path \
      file:$PWD/livy-core_2.11-0.6.0.2.0.0-38-incubating.jar \
      --user-class-path \
      file:$PWD/commons-codec-1.9.jar \
      --user-class-path \
      file:$PWD/datanucleus-core-3.2.10.jar \
      --user-class-path \
      file:$PWD/datanucleus-rdbms-3.2.9.jar \
      --user-class-path \
      file:$PWD/datanucleus-api-jdo-3.2.6.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    netty-all-4.1.17.Final.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/netty-all-4.1.17.Final.jar" } size: 3780056 timestamp: 1566520789368 type: FILE visibility: PRIVATE
    datanucleus-rdbms-3.2.9.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/datanucleus-rdbms-3.2.9.jar" } size: 1809447 timestamp: 1566520789629 type: FILE visibility: PRIVATE
    livy-core_2.11-0.6.0.2.0.0-38-incubating.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/livy-core_2.11-0.6.0.2.0.0-38-incubating.jar" } size: 95762 timestamp: 1566520789516 type: FILE visibility: PRIVATE
    livy-repl_2.11-0.6.0.2.0.0-38-incubating.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/livy-repl_2.11-0.6.0.2.0.0-38-incubating.jar" } size: 1004584 timestamp: 1566520789487 type: FILE visibility: PRIVATE
    datanucleus-api-jdo-3.2.6.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/datanucleus-api-jdo-3.2.6.jar" } size: 339666 timestamp: 1566520789662 type: FILE visibility: PRIVATE
    livy-rsc-0.6.0.2.0.0-38-incubating.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/livy-rsc-0.6.0.2.0.0-38-incubating.jar" } size: 498457 timestamp: 1566520789454 type: FILE visibility: PRIVATE
    datanucleus-core-3.2.10.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/datanucleus-core-3.2.10.jar" } size: 1890075 timestamp: 1566520789590 type: FILE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/__spark_conf__.zip" } size: 202898 timestamp: 1566520789998 type: ARCHIVE visibility: PRIVATE
    sparkr -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/sparkr.zip" } size: 1628998 timestamp: 1566520789740 type: ARCHIVE visibility: PRIVATE
    commons-codec-1.9.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/commons-codec-1.9.jar" } size: 263965 timestamp: 1566520789548 type: FILE visibility: PRIVATE
    pyspark.zip -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/pyspark.zip" } size: 591331 timestamp: 1566520789770 type: FILE visibility: PRIVATE
    livy-api-0.6.0.2.0.0-38-incubating.jar -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/livy-api-0.6.0.2.0.0-38-incubating.jar" } size: 14181 timestamp: 1566520789426 type: FILE visibility: PRIVATE
    hive-site.xml -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/hive-site.xml" } size: 1647 timestamp: 1566520789693 type: FILE visibility: PRIVATE
    py4j-0.10.7-src.zip -> resource { scheme: "hdfs" host: "10.1.22.10" port: 9000 file: "/user/hadoop/.sparkStaging/application_1566517991256_0004/py4j-0.10.7-src.zip" } size: 42437 timestamp: 1566520789811 type: FILE visibility: PRIVATE
===============================================================================
19/08/23 08:39:58 WARN util.Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
19/08/23 08:39:58 INFO util.Utils: Using initial executors = 1, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/08/23 08:39:58 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@10.1.22.3:34260)
19/08/23 08:39:58 INFO yarn.YarnAllocator: Will request 1 executor container(s), each with 2 core(s) and 12288 MB memory (including 2048 MB of overhead)
19/08/23 08:39:58 INFO yarn.YarnAllocator: Submitted 1 unlocalized container requests.
19/08/23 08:39:58 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/08/23 08:39:59 INFO impl.AMRMClientImpl: Received new token for : 10.1.22.3:44435
19/08/23 08:39:59 INFO yarn.YarnAllocator: Launching container container_1566517991256_0004_01_000003 on host 10.1.22.3 for executor with ID 1
19/08/23 08:39:59 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
19/08/23 08:39:59 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
19/08/23 08:39:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 10.1.22.3:44435
19/08/23 08:40:01 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.22.3:36294) with ID 1
19/08/23 08:40:01 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 1)
19/08/23 08:40:01 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
19/08/23 08:40:01 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
19/08/23 08:40:01 INFO driver.SparkEntries: Spark context finished initialization in 3990ms
19/08/23 08:40:01 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.1.22.3:48028 with 5.2 GB RAM, BlockManagerId(1, 10.1.22.3, 48028, None)
19/08/23 08:40:01 INFO driver.SparkEntries: Created Spark session (with Hive support).
19/08/23 08:40:11 INFO spark.ExecutorAllocationManager: Request to remove executorIds: 1
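The final `ExecutorAllocationManager` line is dynamic allocation reclaiming executor 1 roughly ten seconds after it registered with no work queued, which suggests the idle timeout on this cluster is shorter than Spark's 60s default. If that reclaim is too aggressive, the interval is configurable; a sketch with the default value, shown for illustration only:

```properties
# spark-defaults.conf -- how long an idle executor survives before
# dynamic allocation asks YARN to release it. Spark's default is 60s;
# the ~10s gap in this log suggests a shorter value was configured here.
spark.dynamicAllocation.executorIdleTimeout 60s
```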