Spark / SPARK-2356

Exception: Could not locate executable null\bin\winutils.exe in the Hadoop

    Details

    • Type: Bug
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 1.0.0, 1.1.1, 1.2.1, 1.2.2, 1.3.1, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2
    • Fix Version/s: None
    • Component/s: Windows
    • Labels:
      None

      Description

      I'm trying to run some transformations on Spark. They work fine on the cluster (YARN, Linux machines). However, when I try to run them on a local machine (Windows 7) under a unit test, I get errors (I don't use Hadoop; I read files from the local filesystem):

      14/07/02 19:59:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      14/07/02 19:59:31 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
      java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
      	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
      	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
      	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
      	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
      	at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
      	at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
      	at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
      	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
      	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
      	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
      	at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
      	at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
      	at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
      	at org.apache.spark.SparkContext.<init>(SparkContext.scala:97)
      

      This happens because the Hadoop config is initialized every time a SparkContext is created, regardless of whether Hadoop is required or not.

      I propose adding a flag that indicates whether the Hadoop config is required (or letting this configuration be started manually).
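      (For reference, a minimal sketch of the workaround discussed in the comments below: point hadoop.home.dir at a folder that contains bin\winutils.exe before the SparkContext is created. The helper object and the C:\hadoop path are placeholders, not anything Spark provides.)

        import org.apache.spark.{SparkConf, SparkContext}

        object LocalTestBootstrap {
          // Placeholder path: the folder must contain bin\winutils.exe on Windows
          def createLocalContext(): SparkContext = {
            System.setProperty("hadoop.home.dir", "C:\\hadoop")
            val conf = new SparkConf().setMaster("local[*]").setAppName("unit-test")
            new SparkContext(conf)
          }
        }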


          Activity

          srowen Sean Owen added a comment -

          This isn't specific to Spark: http://stackoverflow.com/questions/19620642/failed-to-locate-the-winutils-binary-in-the-hadoop-binary-path

          And if you look at when this code is called in SparkContext, it's from the hadoopRDD() method. You will certainly end up using Hadoop code if your code accesses Hadoop functionality, so I think it is behaving as expected.

          kostiantyn Kostiantyn Kudriavtsev added a comment -

          No, Sean,
          check the stack trace carefully: the exception is caused by SparkContext.<init> (the constructor of SparkContext). Where do you see hadoopRDD at all?

          srowen Sean Owen added a comment -

          Yeah, you are right; on a closer look, this is coming from this bit in SparkContext's constructor, not from when it is later accessed in hadoopRDD():

            val hadoopConfiguration: Configuration = {
              val hadoopConf = SparkHadoopUtil.get.newConfiguration()
          

          So it gets triggered no matter what when you instantiate SparkContext.

          This could be made lazy. But I see other things in SparkContext end up using it, like the EventLogger, so it would get evaluated pretty quickly even when not calling hadoopRDD().

          I am not sure whether the resolution should be that, well, Spark just uses the Hadoop APIs a lot and so you have to make sure the Hadoop libraries work properly on your platform, or whether it's at all possible to tease these apart enough that SparkContext doesn't touch this part of Hadoop unless it has to.
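          For illustration only, "made lazy" would amount to something like the sketch below; this is not the actual SparkContext code, just the shape of the change:

            import org.apache.hadoop.conf.Configuration
            import org.apache.spark.deploy.SparkHadoopUtil

            // Sketch: build the Hadoop configuration on first use instead of at
            // construction time, so a purely local job never touches Shell's init.
            class LazyHadoopConf {
              lazy val hadoopConfiguration: Configuration =
                SparkHadoopUtil.get.newConfiguration()
            }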

          kostiantyn Kostiantyn Kudriavtsev added a comment -

          And in the use case where I got this exception I didn't touch Hadoop at all.
          My code works only with local files, not HDFS! It was very strange to get stuck on this kind of issue.
          I believe it must be marked as critical and fixed ASAP!

          tnabil Tarek Nabil added a comment -

          Are there any workarounds for this issue? I'm not even able to run the first example from the quick start guide.

          gq Guoqiang Li added a comment -

          This is probably caused by not setting HADOOP_HOME or hadoop.home.dir.
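          A quick way to confirm that neither is set (which is why the message contains a literal null in the path) is a sketch like the following; WinutilsCheck is just an ad-hoc name:

            object WinutilsCheck extends App {
              // If both print <not set>, Hadoop resolves its home dir to null and
              // then looks for "null\bin\winutils.exe".
              println(s"HADOOP_HOME     = ${sys.env.getOrElse("HADOOP_HOME", "<not set>")}")
              println(s"hadoop.home.dir = ${sys.props.getOrElse("hadoop.home.dir", "<not set>")}")
            }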

          tnabil Tarek Nabil added a comment -

          Yes, but the whole point is that you should not need Hadoop at all.

          kostiantyn Kostiantyn Kudriavtsev added a comment -

          Guoqiang, Spark does not work exclusively with Hadoop; it can live entirely outside a Hadoop cluster/environment. So it's obvious that these two variables might not be set.

          tnabil Tarek Nabil added a comment -

          There is a nice workaround documented here: http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7

          It worked for me.

          rusanu Remus Rusanu added a comment - edited

          HADOOP-11003 is requesting hadoop-common to reduce the severity of the error logged in this case. The error is raised, but getWinUtilsPath() catches it and logs the stack with error severity. Your code should not see the exception.

          luca_venturini Luca Venturini added a comment -

          This error also occurs in the spark-shell and in the MLlib examples, where execution simply stops.

          The above-mentioned workaround works by setting a Windows environment variable called HADOOP_HOME.

          vjapache vijay added a comment -

          This is how I worked around this in Windows:

          • Download and extract https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master
          • Modify bin\spark-class2.cmd and add the hadoop.home.dir system property:
            if not [%SPARK_SUBMIT_BOOTSTRAP_DRIVER%] == [] (
              set SPARK_CLASS=1
              "%RUNNER%" -Dhadoop.home.dir=C:\code\hadoop-common-2.2.0-bin-master org.apache.spark.deploy.SparkSubmitDriverBootstrapper %BOOTSTRAP_ARGS%
            ) else (
              "%RUNNER%" -Dhadoop.home.dir=C:\code\hadoop-common-2.2.0-bin-master -cp "%CLASSPATH%" %JAVA_OPTS% %*
            )
            

          That being said, this is a workaround for what I consider a critical bug (if Spark is indeed meant to support Windows).

          dvohra DeepakVohra added a comment -

          The following error gets generated on Windows with the master URL set to "local" for KMeans clustering, but the application completes without any other error.

          java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
          at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
          at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
          at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
          at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
          at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
          at org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:696)
          at org.apache.spark.SparkContext$$anonfun$26.apply(SparkContext.scala:696)
          at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
          at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:170)
          at scala.Option.map(Option.scala:145)
          at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:170)
          at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
          at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
          at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
          at org.apache.spark.rdd.ZippedPartitionsBaseRDD.getPartitions(ZippedPartitionsRDD.scala:55)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
          at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
          at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
          at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
          at org.apache.spark.rdd.RDD.count(RDD.scala:910)
          at org.apache.spark.rdd.RDD.takeSample(RDD.scala:403)
          at org.apache.spark.mllib.clustering.KMeans.initKMeansParallel(KMeans.scala:277)
          at org.apache.spark.mllib.clustering.KMeans.runAlgorithm(KMeans.scala:155)
          at org.apache.spark.mllib.clustering.KMeans.run(KMeans.scala:132)
          at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:352)
          at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:362)
          at org.apache.spark.mllib.clustering.KMeans.train(KMeans.scala)
          at kmeans.KMeansClusterer.main(KMeansClusterer.java:40)

          srowen Sean Owen added a comment -

          The short answer is that you need to set HADOOP_CONF_DIR even when not using Hadoop. But it's still kind of a bug. It only affects Windows, which has other problems.

          dvohra DeepakVohra added a comment -

          Thanks Sean.

          HADOOP_CONF_DIR shouldn't need to be set if Hadoop is not used.

          Hadoop doesn't even get installed on Windows.

          stevel@apache.org Steve Loughran added a comment -

          It's coming from {{UserGroupInformation.setConfiguration(conf)}}; UGI uses Hadoop's StringUtils to do something, which then initializes a static variable:

          public static final Pattern ENV_VAR_PATTERN = Shell.WINDOWS ?    WIN_ENV_VAR_PATTERN : SHELL_ENV_VAR_PATTERN;
          

          And Hadoop's Shell utility does some work during its initialization that depends on winutils.exe being on the path.

          Convoluted, but there you go. HADOOP-11293 proposes factoring out the Shell.Windows code into something standalone; if that can be pushed into Hadoop 2.8, then this problem will go away from then on.
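          A minimal reproduction of that chain (assuming a Windows machine with neither HADOOP_HOME nor hadoop.home.dir set; UgiTrigger is just an ad-hoc name) is a single call, with no HDFS involved:

            import org.apache.hadoop.conf.Configuration
            import org.apache.hadoop.security.UserGroupInformation

            object UgiTrigger extends App {
              // Touching UGI pulls in StringUtils, which pulls in Shell, which
              // looks for winutils.exe and logs the error if it cannot be found.
              UserGroupInformation.setConfiguration(new Configuration())
            }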

          avulanov Alexander Ulanov added a comment -

          The following worked for me:
          Download http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe and put it in DISK:\FOLDERS\bin\
          Set HADOOP_CONF=DISK:\FOLDERS

          asflucas Lucas Partridge added a comment - edited

          Neither HADOOP_CONF nor HADOOP_CONF_DIR worked for me. I had to do this instead (I'm using Spark 1.3.0 on Windows 7):
          set HADOOP_HOME=DISK:\FOLDERS

          bing zhengbing li added a comment -

          The winutils.exe from "http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe" is for Windows 7 (64-bit). I use Windows 7 (32-bit), so I followed the instructions in "http://vbashur.blogspot.com/2015/03/apache-spark-checkpoint-issue-on-windows.html" and downloaded winutils.exe from "https://code.google.com/p/rrd-hadoop-win32/source/checkout".

          swapan Swapan Golla added a comment -

          Same issue for me. I am on Windows 7 (64-bit) and using Spark 1.4.1. I had to set the HADOOP_HOME variable and copy the Hortonworks winutils.exe into the HADOOP_HOME\bin folder, and it worked for me.

          stevel@apache.org Steve Loughran added a comment -

          The original JIRA here is just about an error being printed out; in that specific example it is just noise. You can configure log4j to log nothing from org.apache.hadoop.util.Shell and you won't see this text. The other issues people are finding are actual problems: Hadoop and the libraries underneath are trying to load WINUTILS.EXE for real work, and failing.
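          Done programmatically, that log4j suggestion is roughly the sketch below (assuming the log4j 1.x API bundled with Spark 1.x; QuietShell is just an ad-hoc name). The same effect can be had from log4j.properties:

            import org.apache.log4j.{Level, Logger}

            object QuietShell {
              // Silence only Hadoop's Shell logger; all other logging is untouched.
              def apply(): Unit =
                Logger.getLogger("org.apache.hadoop.util.Shell").setLevel(Level.OFF)
            }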

          michael_han Michael Han added a comment -

          I followed this workaround to fix the issue:
          http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7

          stevel@apache.org Steve Loughran added a comment -

          I've put up binaries compatible with Hadoop 2.6 & 2.7 to make installing things easier: https://github.com/steveloughran/winutils

          Note also Hadoop 2.8 includes HADOOP-10775, "fail with meaningful messages if winutils can't be found"

          michael_han Michael Han added a comment - edited

          Hello everyone,

          I encountered this issue again today when I tried to create a cluster using two Windows 7 (64-bit) desktops.
          The error happens when I register the second worker with the master using the following command:
          spark-class org.apache.spark.deploy.worker.Worker spark://masternode:7077

          Strangely, it works fine when I register the first worker with the master.
          Does anyone know a workaround for this issue?
          The workaround above works fine when I use local mode.
          I registered one worker successfully in the cluster, but when I run spark-submit on that worker, it also throws this exception.
          I have searched the entire internet and have not seen anybody deploy a Windows Spark cluster successfully without Hadoop. I have a demo in a few days, so I hope someone can help me with this; thank you. Otherwise I have to run VMware VMs...

          I tried to set HADOOP_HOME = C:\winutil in the environment variables, but it doesn't work.
          The error is:
          Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
          15/12/14 16:49:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
          15/12/14 16:49:22 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
          java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
          at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
          at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
          at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
          at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
          at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
          at org.apache.hadoop.security.Groups.<init>(Groups.java:86)
          at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
          at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
          at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
          at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
          at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
          at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
          at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
          at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2091)
          at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2091)
          at scala.Option.getOrElse(Option.scala:120)
          at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2091)
          at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:212)
          at org.apache.spark.deploy.worker.Worker$.startRpcEnvAndEndpoint(Worker.scala:692)
          at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:674)
          at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
          15/12/14 16:49:22 INFO SecurityManager: Changing view acls to: mh6
          15/12/14 16:49:22 INFO SecurityManager: Changing modify acls to: mh6
          15/12/14 16:49:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mh6); users with modify permissions: Set(mh6)
          15/12/14 16:49:23 INFO Slf4jLogger: Slf4jLogger started
          15/12/14 16:49:23 INFO Remoting: Starting remoting
          15/12/14 16:49:24 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@167.3.129.160:46862]
          15/12/14 16:49:24 INFO Utils: Successfully started service 'sparkWorker' on port 46862.
          15/12/14 16:49:24 INFO Worker: Starting Spark worker 167.3.129.160:46862 with 4 cores, 2.9 GB RAM
          15/12/14 16:49:24 INFO Worker: Running Spark version 1.5.2
          15/12/14 16:49:24 INFO Worker: Spark home: C:\spark-1.5.2-bin-hadoop2.6\bin\..
          15/12/14 16:49:24 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
          15/12/14 16:49:24 INFO WorkerWebUI: Started WorkerWebUI at http://167.3.129.160:8081
          15/12/14 16:49:24 INFO Worker: Connecting to master 192.168.79.1:7077...
          15/12/14 16:49:39 INFO Worker: Retrying connection to master (attempt # 1)
          15/12/14 16:49:39 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-2,5,main]
          java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3ef5e68c rejected from java.util.concurrent.ThreadPoolExecutor@741cb720[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
          at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
          at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
          at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
          at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
          at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
          at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
          at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
          at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
          at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
          at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
          at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
          at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
          at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
          at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
          at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
          at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
          at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
          at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
          at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
          at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
          at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
          at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
          at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
          at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
          at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
          at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
          at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
          at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
          at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
          at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
          at akka.actor.ActorCell.invoke(ActorCell.scala:487)
          at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
          at akka.dispatch.Mailbox.run(Mailbox.scala:220)
          at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
          at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
          at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
          at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
          at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
          15/12/14 16:49:39 INFO ShutdownHookManager: Shutdown hook called

          shankar_dh shankar added a comment -

          I have followed this solution – https://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7 – but I am still not able to resolve this issue:
          17/01/05 22:40:36 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:50594 (size: 9.8 KB, free: 1043.2 MB)
          17/01/05 22:40:36 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:12
          17/01/05 22:40:36 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
          java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
          at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)

          I am running a simple Spark program in Scala IDE on Windows 7; I don't have Hadoop installed on my Windows machine.
          I followed the steps below, so why am I still failing?
          1. I copied winutils.exe to C:\winutil\bin from https://social.msdn.microsoft.com/forums/azure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
          2. I set my environment variable HADOOP_HOME = C:\winutil and added C:\winutil\bin to PATH
          3. Below is my Spark code:
          import org.apache.spark.{SparkConf, SparkContext}

          object WordCount extends App {
            val conf = new SparkConf()
            // Point Hadoop at the folder that contains bin\winutils.exe
            System.setProperty("hadoop.home.dir", "C:\\winutil\\")
            val sc = new SparkContext("local", "WordCount", conf)
            val test = sc.textFile("food.txt")
            test.flatMap(line => line.split(" "))
              .map(word => (word, 1))
              .reduceByKey(_ + _)
              .saveAsTextFile("food_output.txt")
          }
          stevel@apache.org Steve Loughran added a comment -

          I'm sorry you are suffering; it's a pain for all of us who encounter it.

          I would recommend you grab the bin dir from whichever spark version is under here: https://github.com/steveloughran/winutils

          For a test, after setting up your path, try running winutils on the command line and see what happens.

          shankar_dh shankar added a comment -

          Hey, thanks a lot Steve. It worked perfectly fine with your version of the winutils.exe file.

          hyukjin.kwon Hyukjin Kwon added a comment -

          Is this really a Spark-related issue?


            People

            • Assignee:
              Unassigned
              Reporter:
              kostiantyn Kostiantyn Kudriavtsev
            • Votes:
              14
              Watchers:
              27
