Ignite / IGNITE-13178

Spark job gets stuck indefinitely while trying to fetch data from ignite cluster using thin client


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: cache, clients, thin client
    • Labels: None
    • Flags: Docs Required, Release Notes Required

    Description

      We are trying to use Ignite as an in-memory distributed cache, populating the cache from a Spark job.

      We use the thin client to fetch data from the cache.

      // We use a ThreadLocal to avoid creating too many client instances.
      private static final ThreadLocal<IgniteClient> igniteClientContext = new ThreadLocal<>();

      // Thin client creation. ("cfg" and "logger" are fields of the helper
      // class; see the attached IgniteHelper.java.)
      public static IgniteClient getIgniteClient(String[] address) {
          if (igniteClientContext.get() == null) {
              ClientConfiguration clientConfig;
              if (cfg == null) {
                  clientConfig = new ClientConfiguration().setAddresses(address);
              } else {
                  clientConfig = cfg;
              }
              IgniteClient igniteClient = Ignition.startClient(clientConfig);
              logger.info("igniteClient initialized");
              igniteClientContext.set(igniteClient);
          }
          return igniteClientContext.get();
      }
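      One consequence of this pattern worth noting: once the underlying socket dies, the ThreadLocal keeps handing back the same broken client. A minimal sketch of an invalidation hook (resetIgniteClient is a hypothetical method added to the same helper class, not part of the attached code):

      // Hypothetical companion to getIgniteClient(): discard the cached client
      // so that the next getIgniteClient() call opens a fresh connection.
      public static void resetIgniteClient() {
          IgniteClient igniteClient = igniteClientContext.get();
          if (igniteClient != null) {
              try {
                  igniteClient.close(); // releases the underlying TCP channel
              } catch (Exception e) {
                  logger.warn("Failed to close stale thin client", e);
              }
              igniteClientContext.remove();
          }
      }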

       

      From the Spark code, I create an instance of the Ignite thin client and a cache object.

       

      val address = config.igniteServers.split(",") // config.igniteServers = "10.xx.xxx.xxx:10800,10.xx.xx.xxx:10800"


      The code below is called from the Spark executors. Each executor processes a set of records, and we only read from the cache: each record being processed is checked against the cache, and if it is already present we ignore it, otherwise we consume it (a sketch of this check follows the code below).


       

      val cacheCfg = new ClientCacheConfiguration()
        .setName(PNR_CACHE)
        .setCacheMode(CacheMode.REPLICATED)
        .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
        .setDefaultLockTimeout(30000)
      val igniteClient = IgniteHelper.getIgniteClient(address)
      val cache: ClientCache[Long, Boolean] = igniteClient.getOrCreateCache(cacheCfg)
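      For reference, the check-then-consume logic described above can be folded into a single atomic call with putIfAbsent (a Java sketch against the thin-client API; the actual job code is Scala, and consumeIfNew is a hypothetical helper):

      // Hypothetical helper illustrating the dedup check: putIfAbsent returns
      // true only for the first writer of a key, so a false result means the
      // record is already in the cache and should be skipped.
      static boolean consumeIfNew(ClientCache<Long, Boolean> cache, long recordKey) {
          return cache.putIfAbsent(recordKey, Boolean.TRUE);
      }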


      The job runs fine for a couple of hours and then gets stuck indefinitely with the exception below.


       

      org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hdpct2ldap01g02.hadoop.sgdcprod.XXXX.com/10.xx.xx.xx,port=10800,localport=20214]]
      at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:499)
      at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:491)
      at org.apache.ignite.internal.client.thin.TcpClientChannel.access$100(TcpClientChannel.java:92)
      at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:538)
      at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
      at org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:272)
      at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
      at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
      at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
      at org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:187)
      at org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:124)
      at com.XXXX.eda.pnr.PnrApplication$$anonfun$2$$anonfun$apply$4.apply(PnrApplication.scala:305)
      at com.XXXX.eda.pnr.PnrApplication$$anonfun$2$$anonfun$apply$4.apply(PnrApplication.scala:297)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
      at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:217)
      at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
      at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
      at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
      at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
      at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
      at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
      at org.apache.spark.scheduler.Task.run(Task.scala:109)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:381)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
      Caused by: java.net.SocketException: Connection timed out (Read failed)
      at java.net.SocketInputStream.socketRead0(Native Method)
      at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
      at java.net.SocketInputStream.read(SocketInputStream.java:171)
      at java.net.SocketInputStream.read(SocketInputStream.java:141)
      at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:535)

       

      Stack Overflow link:

      https://stackoverflow.com/questions/62531478/spark-job-gets-stuck-indefinitely-while-trying-to-fetch-data-from-ignite-cluster

       

       

      In the thread dump, the root cause appears to be a socket read that blocks indefinitely:

       

      java.net.SocketInputStream.socketRead0(Native Method)
      java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
      java.net.SocketInputStream.read(SocketInputStream.java:171)
      java.net.SocketInputStream.read(SocketInputStream.java:141)
      org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:535)
      org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
      org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:272)
      org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
      org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
      org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
      org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:222)
      org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:509)
      org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:111)
      com.XXXX.eda.pnr.PnrApplication$$anonfun$2$$anonfun$apply$4.apply(PnrApplication.scala:322)
      com.XXXX.eda.pnr.PnrApplication$$anonfun$2$$anonfun$apply$4.apply(PnrApplication.scala:299)
      scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
      org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:217)
      org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1094)
      org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1085)
      org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1020)
      org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1085)
      org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:811)
      org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
      org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
      org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
      org.apache.spark.scheduler.Task.run(Task.scala:109)
      org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:381)
      java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      java.lang.Thread.run(Thread.java:748)

       

      We set defaultReadTimeout in the spark.properties file, but the read does not time out as expected.

      spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC -Dsun.net.client.defaultReadTimeout=300000 -Dsun.net.client.defaultConnectTimeout=300000 -DIGNITE_REST_START_ON_CLIENT=true
      spark.driver.extraJavaOptions=-Dsun.net.client.defaultReadTimeout=300000 -Dsun.net.client.defaultConnectTimeout=300000 -DIGNITE_REST_START_ON_CLIENT=true
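      Note that sun.net.client.defaultReadTimeout and sun.net.client.defaultConnectTimeout only apply to java.net.URLConnection-based protocol handlers, not to the raw TCP sockets the thin client opens, which would explain why the read never times out. A minimal sketch of setting the timeout on the thin client itself, assuming the Ignite 2.x ClientConfiguration API (addresses are placeholders):

      import org.apache.ignite.Ignition;
      import org.apache.ignite.client.IgniteClient;
      import org.apache.ignite.configuration.ClientConfiguration;

      public class ThinClientTimeoutExample {
          public static void main(String[] args) throws Exception {
              ClientConfiguration clientConfig = new ClientConfiguration()
                  .setAddresses("10.0.0.1:10800", "10.0.0.2:10800")
                  // Socket send/receive timeout in milliseconds; the default
                  // of 0 lets a read on a dead connection block forever.
                  .setTimeout(300_000);

              try (IgniteClient client = Ignition.startClient(clientConfig)) {
                  // With the timeout set, a hung connection surfaces as a
                  // ClientConnectionException instead of an indefinite hang.
              }
          }
      }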

Attachments

    1. IgniteHelper.java (5 kB, Ameer Basha Pattan)

People

    Assignee: Unassigned
    Reporter: Ameer Basha Pattan (pameer402)
    Votes: 0
    Watchers: 1