Hadoop Common
HADOOP-9989

Bug introduced in HADOOP-9374, which parses the -tokenCacheFile option as a binary file but sets it into the configuration as a JSON file.

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0-beta
    • Fix Version/s: 2.6.0
    • Component/s: security, util
    • Labels:
      None
    • Environment:

      Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6

    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      The patch for HADOOP-9374 introduced a bug: the value of the
      tokenCacheFile parameter is parsed as a binary file but set into the
      mapreduce.job.credentials.json parameter in GenericOptionsParser, which JobSubmitter then cannot parse when it reads the value.
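      The mismatch can be sketched with a plain map standing in for Hadoop's Configuration; the class and setup below are hypothetical, and only the two property names come from this issue:

      ```java
      import java.util.HashMap;
      import java.util.Map;

      public class KeyMismatchSketch {
          public static void main(String[] args) {
              // Stand-in for Hadoop's Configuration object.
              Map<String, String> conf = new HashMap<>();

              // Before the fix, GenericOptionsParser read the token cache file
              // in the binary token-storage format but recorded its path under
              // the JSON property name:
              conf.put("mapreduce.job.credentials.json", "/tmp/tokens.bin");

              // JobSubmitter's binary code path therefore sees nothing under
              // the binary key, while its JSON code path tries to JSON-parse
              // binary data and fails.
              System.out.println(conf.get("mapreduce.job.credentials.binary")); // prints "null"
          }
      }
      ```

      The eventual fix keeps reader and writer agreeing on one key: the path is stored under "mapreduce.job.credentials.binary" instead.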

      1. HADOOP-9989.patch
        6 kB
        Jinghui Wang
      2. HADOOP-9989.001.patch
        2 kB
        zhihai xu
      3. HADOOP-9989.addendum0.patch
        2 kB
        zhihai xu

        Issue Links

          Activity

          Transition                   Time In Source Status   Execution Times   Last Executer        Last Execution Date
          Open → Patch Available       341d 2h 23m             1                 zhihai xu            28/Aug/14 02:18
          Patch Available → Resolved   13d 4h 9m               1                 Alejandro Abdelnur   10/Sep/14 06:28
          Resolved → Closed            81d 21h 43m             1                 Arun C Murthy        01/Dec/14 03:11
          Kiran Kumar M R made changes -
          Link This issue is related to HADOOP-9374 [ HADOOP-9374 ]
          Arun C Murthy made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          zhihai xu added a comment -

          I uploaded a patch for MAPREDUCE-6086. With that patch, we don't need this addendum patch (HADOOP-9989.addendum0.patch).

          zhihai xu added a comment -

          I created a JIRA, MAPREDUCE-6086, to handle all URIs properly in the "mapreduce.job.credentials.binary" configuration.

          zhihai xu added a comment -

          Hi Alejandro Abdelnur, that is a good question. It is not possible to use non-local FS URIs in the "-tokenCacheFile" option parameter.
          The current code in GenericOptionsParser.java always assumes the "-tokenCacheFile" option parameter is a local file;
          otherwise it will throw an exception at

                FileSystem localFs = FileSystem.getLocal(conf);
                Path p = localFs.makeQualified(new Path(fileName));
                if (!localFs.exists(p)) {
                    throw new FileNotFoundException("File "+fileName+" does not exist.");
                }
          


          You gave a good suggestion: change the "mapreduce.job.credentials.binary" configuration to support all path formats, both absolute and relative URIs.
          I will create a separate JIRA for this and will also test all possible path names.

          Also, if we want to support non-local FS URIs in the "-tokenCacheFile" option parameter, we can change

                if (!localFs.exists(p)) {
                    throw new FileNotFoundException("File "+fileName+" does not exist.");
                }
          

          to

                if (!p.getFileSystem(conf).exists(p)) {
                    throw new FileNotFoundException("File "+fileName+" does not exist.");
                }
          
          Alejandro Abdelnur added a comment -

          Also, is it possible to use non-local FS URIs? If so, the code does not handle that currently, right?

          Alejandro Abdelnur added a comment -

          Zhihai,

          Please open a new JIRA to handle all URIs properly. Also, it would be great if we can test that all possible URIs work, both absolute and relative; if the code is prepending "file://", we should convert the URIs to absolute (and the prepending should use 2 '/', not 3).

          zhihai xu added a comment -

          Hi Alejandro Abdelnur, I just found out that the "mapreduce.job.credentials.binary" configuration parameter needs a path name without a URI scheme.

              String binaryTokenFilename =
                conf.get("mapreduce.job.credentials.binary");
              if (binaryTokenFilename != null) {
                Credentials binary = Credentials.readTokenStorageFile(
                    new Path("file:///" + binaryTokenFilename), conf);
                credentials.addAll(binary);
              }
          

          MR will prepend the scheme "file:///" to the parameter.
          I uploaded a new patch, HADOOP-9989.addendum0.patch, to fix this issue:
          it uses p.toUri().getPath() to remove the URI scheme.
          Please review it.
          Thanks,
          zhihai

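          The scheme-stripping step the addendum relies on can be illustrated with java.net.URI standing in for Hadoop's Path.toUri() (a sketch, not the actual patch code):

          ```java
          import java.net.URI;
          import java.net.URISyntaxException;

          public class SchemeStripSketch {
              public static void main(String[] args) throws URISyntaxException {
                  // A path qualified by the local filesystem carries a "file" scheme.
                  URI qualified = new URI("file:///tmp/tokens.bin");

                  // getPath() drops the scheme and authority, leaving the bare
                  // path that "mapreduce.job.credentials.binary" expects; MR can
                  // then safely prepend "file:///" again when reading the property.
                  System.out.println(qualified.getPath()); // prints "/tmp/tokens.bin"
              }
          }
          ```

          Without this stripping, MR's "file:///" prefix would be applied to an already-qualified path, producing an unparseable value.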
          zhihai xu made changes -
          Attachment HADOOP-9989.addendum0.patch [ 12668381 ]
          zhihai xu added a comment -

          Thanks a lot, Alejandro Abdelnur.
          Many thanks to Daryn Sharp for the review and comments.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/)
          HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: rev b100949404843ed245ef4e118291f55b3fdc81b8)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/)
          HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: rev b100949404843ed245ef4e118291f55b3fdc81b8)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/676/)
          HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: rev b100949404843ed245ef4e118291f55b3fdc81b8)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java
          Alejandro Abdelnur made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Hadoop Flags Reviewed [ 10343 ]
          Fix Version/s 2.6.0 [ 12327179 ]
          Resolution Fixed [ 1 ]
          Alejandro Abdelnur added a comment -

          Thanks Zhihai and Daryn for reviewing it.

          Committed to trunk and branch-2.

          Alejandro Abdelnur added a comment -

          After poking Zhihai offline for a bit on this, I think I understand the whole story here. My concern was that this would be an incompatible change, but it is safe to change the property from json to binary because the MR submitter code handles both.

          +1

          zhihai xu added a comment -

          Hi Daryn Sharp, thanks for agreeing to my patch (HADOOP-9989.001.patch).
          It will be safe to keep the mapreduce configuration, which will be saved in JobContextImpl.credentials by JobSubmitter.java.
          Thanks,
          zhihai

          Daryn Sharp added a comment -

          I'm OK with the current patch, but I'm not sure setting the conf is even necessary. This is common code that contains a reference to a mapreduce conf key, which I believe is a relic. MR passes tokens in the UGI context during job submission, which is why the prior line reads the credentials from the cmdline option into the UGI. If you've found other references outside of job submission, or feel it's too risky, the current patch is OK.

          zhihai xu added a comment -

          The test failure is not related to this change:

          Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
          Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.193 sec <<< FAILURE! - in org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
          testDelegationTokenAuthenticationURLWithNoDTFilter(org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken) Time elapsed: 0.152 sec <<< ERROR!
          java.net.BindException: Address already in use

          The test is passed in my local build.
          -------------------------------------------------------
          T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.528 sec - in org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

          Results :

          Tests run: 9, Failures: 0, Errors: 0, Skipped: 0

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664793/HADOOP-9989.001.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common:

          org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4560//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4560//console

          This message is automatically generated.

          zhihai xu added a comment -

          Daryn Sharp, are you OK with the second solution? I uploaded a patch, HADOOP-9989.001.patch, for it:

                UserGroupInformation.getCurrentUser().addCredentials(
                    Credentials.readTokenStorageFile(p, conf));
                conf.set("mapreduce.job.credentials.binary", p.toString(),
                         "from -tokenCacheFile command line option");
          

          I think using a binary file for the -tokenCacheFile option is good, because most other Hadoop token-related commands use binary files.
          For example: hadoop fetchdt (hdfs fetchdt --renew).

          zhihai xu made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          zhihai xu made changes -
          Assignee zhihai xu [ zxu ]
          zhihai xu made changes -
          Attachment HADOOP-9989.001.patch [ 12664793 ]
          zhihai xu added a comment -

          Daryn Sharp, I think there is another way to fix this problem: use "mapreduce.job.credentials.binary" instead of "mapreduce.job.credentials.json" for
          -tokenCacheFile in the following code:

                UserGroupInformation.getCurrentUser().addCredentials(
                    Credentials.readTokenStorageFile(p, conf));
                conf.set("mapreduce.job.credentials.json", p.toString(),
                         "from -tokenCacheFile command line option");
          

          Currently the -tokenCacheFile option is broken for both MR2 and MR1:
          the following code in JobSubmitter.java parses the file from -tokenCacheFile as a JSON file, not a binary file,
          but Credentials.readTokenStorageFile(p, conf) expects a binary file, not a JSON file.

              String tokensFileName = conf.get("mapreduce.job.credentials.json");
              if (tokensFileName != null) {
                LOG.info("loading user's secret keys from " + tokensFileName);
                String localFileName = new Path(tokensFileName).toUri().getPath();

                boolean json_error = false;
                try {
                  // read JSON
                  ObjectMapper mapper = new ObjectMapper();
                  Map<String, String> nm =
                      mapper.readValue(new File(localFileName), Map.class);

                  for (Map.Entry<String, String> ent : nm.entrySet()) {
                    credentials.addSecretKey(new Text(ent.getKey()), ent.getValue()
                        .getBytes(Charsets.UTF_8));
                  }
                } catch (JsonMappingException e) {
                  json_error = true;
                } catch (JsonParseException e) {
                  json_error = true;
                }
                if (json_error)
                  LOG.warn("couldn't parse Token Cache JSON file with user secret keys");
              }
            }
          
          Daryn Sharp added a comment -

          A polite -1. We have use cases where fetchdt is used to create a token cache file that will be used in conjunction with -tokenCacheFile. We might consider something like using the file suffix to determine if it's json.

          Out of curiosity, what generates a json token cache file?
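          Daryn's suffix idea could look something like the following sketch (entirely hypothetical; no such dispatch exists in any of the attached patches):

          ```java
          public class SuffixDispatchSketch {
              // Hypothetical helper: pick a parser for the token cache file
              // based on its suffix, treating everything that is not ".json"
              // as the binary token-storage format.
              static String formatFor(String fileName) {
                  return fileName.endsWith(".json") ? "json" : "binary";
              }

              public static void main(String[] args) {
                  System.out.println(formatFor("tokens.json")); // prints "json"
                  System.out.println(formatFor("tokens.bin"));  // prints "binary"
              }
          }
          ```

          This would preserve the fetchdt use case (binary files) while still accepting JSON caches, at the cost of making behavior depend on file naming.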

          Sheng Liu added a comment -

          +1. I have the same issue; MRv1 in CDH also needs "mapreduce.job.credentials.json" to be a JSON file.

          Eli Collins made changes -
          Target Version/s 3.0.0, 2.1.1-beta [ 12320357, 12324807 ] 2.2.1 [ 12325254 ]
          Eli Collins made changes -
          Fix Version/s 3.0.0 [ 12320357 ]
          Fix Version/s 2.2.1 [ 12325254 ]
          Steve Loughran made changes -
          Fix Version/s 2.2.1 [ 12325254 ]
          Fix Version/s 2.2.0 [ 12325048 ]
          Arun C Murthy made changes -
          Fix Version/s 2.1.2-beta [ 12325048 ]
          Fix Version/s 2.1.1-beta [ 12324807 ]
          Jinghui Wang made changes -
          Original Estimate 0h [ 0 ]
          Remaining Estimate 0h [ 0 ]
          Jinghui Wang made changes -
          Field Original Value New Value
          Attachment HADOOP-9989.patch [ 12604334 ]
          Jinghui Wang added a comment -

          Based on how JobSubmitter processes the value of the mapreduce.job.credentials.json key, the tokenCacheFile passed in on the command line should be a JSON file. That makes sense, as GenericOptionsParser originally read the value of this file and set it into mapreduce.job.credentials.json in the configuration. The attached patch is based on the assumption that the tokenCacheFile is a JSON file.

          Jinghui Wang created issue -

            People

            • Assignee:
              zhihai xu
              Reporter:
              Jinghui Wang
            • Votes:
              0
            • Watchers:
              10
