Lucene - Core
  LUCENE-5650

Enforce read-only access to any path outside the temporary folder via security manager

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 5.0, Trunk
    • Component/s: general/test
    • Labels:
      None
    • Lucene Fields:
      New

      Description

      The recent refactoring of all the temp file/dir creation functions (which is great!) has a minor regression from what existed before. With the old LuceneTestCase.TEMP_DIR, the directory was created if it did not exist. So, if you set java.io.tmpdir to "./temp", it would create that dir within the per-JVM working dir. However, getBaseTempDirForClass() now only asserts that the dir exists, is a directory, and is writable.

      Lucene uses "." as java.io.tmpdir, and in the test security manager the per-JVM cwd has read/write/execute permissions. However, this allows tests to write to their cwd, which I'm trying to protect against (by restricting cwd to read/execute in my test security manager).

      1. LUCENE-5650.patch
        3 kB
        Ryan Ernst
      2. LUCENE-5650.patch
        17 kB
        Dawid Weiss
      3. LUCENE-5650.patch
        17 kB
        Ryan Ernst
      4. LUCENE-5650.patch
        23 kB
        Ryan Ernst
      5. dih.patch
        3 kB
        Ryan Ernst

        Issue Links

          Activity

          Dawid Weiss added a comment -

          Ryan, is this for a use case in Lucene/Solr or in your own derived code? I'm trying to think how to best handle this. mkdirs() is obviously possible too, but I'm trying to get the bigger picture first.

          Robert Muir added a comment -

          Really this should be fixed here too (IMO).

          I tried to do this, but it's currently hung up on issues in a couple of Solr contribs: LUCENE-5154

          Ryan Ernst added a comment -

          This is for my own code using Lucene/Solr, in which I use the test framework. I agree with Robert that it would be better to fix this in Lucene as well (so tests cannot write to the cwd).

          Robert Muir added a comment -

          The main upside is that it increases sandboxing: being lazy and letting tests read/write to the CWD means a higher possibility of failures because tests "interfere with each other". When they make their own temp dirs this doesn't happen.

          But as soon as a logging framework gets in my way on something, I drop out. I just can't deal with them.

          Maybe someone else can follow through the final inch so we can ban this, or at least only allow it for the modules that aren't fixed.

          Dawid Weiss added a comment -

          > I agree with Robert it would be better to fix this in lucene as well (so tests cannot write to cwd).

          In Lucene and in Solr this bit is kind of taken care of because the junit4 ANT task isolates the cwd of each forked JVM. I agree, though, that tests shouldn't be allowed to write to the cwd.

          Would you be able to provide a patch that changes the default behavior of getBaseTempDirForClass() to mkdirs() if the directory doesn't exist (and fail if it couldn't be created)? It would also be nice if you could provide a permission policy update; at least we'd see which tests are currently offenders.

          I've just returned from a short holiday and won't be able to look into it until the end of the week; I'm trying to dig myself out of the stuff that piled up.
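
          For illustration, a minimal sketch of what that mkdirs() default could look like (the signature and naming here are assumptions, not the actual patch):

            import java.io.File;
            import java.io.IOException;

            final class TempDirSketch {
              /** Sketch: return the base temp dir for a test class, creating it if missing. */
              static File getBaseTempDirForClass(Class<?> clazz) throws IOException {
                File base = new File(System.getProperty("java.io.tmpdir"), clazz.getSimpleName());
                // mkdirs() instead of asserting existence; fail loudly if the directory cannot be created
                if (!base.mkdirs() && !base.isDirectory()) {
                  throw new IOException("Could not create temp dir: " + base.getAbsolutePath());
                }
                if (!base.canWrite()) {
                  throw new IOException("Temp dir is not writable: " + base.getAbsolutePath());
                }
                return base;
              }
            }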

          Ryan Ernst added a comment -

          Here's a patch with the mkdirs() and test policy changes. Interestingly, hunspell tests all seem to fail. They depend on java.io.tmpdir already existing. I think this worked before in Robert's old patch because LuceneTestCase.TEMP_DIR was created in a static block, so before the tests actually ran. Not sure what to do about that...can we initialize tempDirBase in a static block like before?

          Ryan Ernst added a comment -

          I created LUCENE-5655 to address the hunspell/suggester issue with OfflineSorter.

          Robert Muir added a comment -

          > In Lucene and in Solr this bit is kind of taken care of because the junit4 ANT task isolates the cwd of each forked JVM

          Just to clarify: I'm not talking about per-JVM sandboxing. I'm talking about sandboxing individual tests that run inside the same JVM from each other.

          If they do I/O to "shared places" such as their per-JVM CWD (versus their own private per-test temporary directories), then it's possible for them to interfere with each other. This can be quite difficult to debug.

          Dawid Weiss added a comment -

          Hmm... I stopped receiving JIRA updates for some reason – didn't see your replies, guys. Investigating.

          Dawid Weiss added a comment -

          Uwe Schindler pointed me at the Apache blog wrt mail problems. Bad timing.

          Anyway, Ryan:

          > LuceneTestCase.TEMP_DIR was created in a static block, so before the tests actually ran. Not sure what to do about that... can we initialize tempDirBase in a static block like before?

          Don't use static blocks in tests. Static blocks are executed during class loading, so any code inside them runs before the tests even commence. This effectively bypasses any sandboxing/checks the test runner attempts to provide. Also, it's not really predictable when these blocks will execute. The right way to do one-time initialization in JUnit is via @BeforeClass hooks or a class rule.
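
          For illustration, the @BeforeClass alternative to a static block might look like this (a sketch; the class and field names are assumptions):

            import java.io.File;
            import org.junit.BeforeClass;

            public class MyTempDirTest /* extends LuceneTestCase */ {
              static File tempDirBase;

              // Runs once per class, but only after the runner and its rules are installed,
              // so the security manager and sandboxing checks still apply.
              @BeforeClass
              public static void initTempDirBase() {
                tempDirBase = new File(System.getProperty("java.io.tmpdir"));
              }
            }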

          Thanks for the patch, it looks good to me. I'll do some testing and commit it in. Sorry about the delay.

          Dawid Weiss added a comment -

          This is an adjustment to Ryan's patch. I moved a lot of the temp-file related code out of LuceneTestCase (leaving appropriate delegate calls to the rule code and its logic).

          A side effect of this is that the temp dir gets created before any test code below the rule is executed. This helps the hunspell tests.

          All Lucene tests pass. Solr still has a few offenders; I haven't looked at them yet.
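
          A rough sketch of the rule-based approach described above (illustrative only; the actual logic lives in the test framework's rule classes):

            import java.io.File;
            import java.io.IOException;
            import org.junit.rules.ExternalResource;

            /** Provisions the base temp dir before any test code governed by the rule runs. */
            class TempDirProvisioningRule extends ExternalResource {
              File baseTempDir;

              @Override
              protected void before() throws IOException {
                baseTempDir = new File(System.getProperty("java.io.tmpdir"), "tests-" + System.nanoTime());
                if (!baseTempDir.mkdirs() && !baseTempDir.isDirectory()) {
                  throw new IOException("Could not create " + baseTempDir);
                }
              }
            }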

          Dawid Weiss added a comment -

          I think this patch is great. Ran Solr tests and it clearly shows that there are things the tests shouldn't be doing.

            2> 175732 T527 oasc.CoreContainer.recordAndThrow ERROR Unable to create core: collection1 org.apache.solr.common.SolrException: access denied ("java.io.FilePermission" "C:\Work\lucene\trunk\solr\example\solr\collection1\data\index" "write")
            2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:869)
            2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:644)
            2> 	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
            2> 	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
            2> 	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
            2> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
            2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
            2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
            2> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
            2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
            2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
            2> 	at java.lang.Thread.run(Thread.java:722)
            2> Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "C:\Work\lucene\trunk\solr\example\solr\collection1\data\index" "write")
            2> 	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:366)
            2> 	at java.security.AccessController.checkPermission(AccessController.java:555)
            2> 	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
            2> 	at java.lang.SecurityManager.checkWrite(SecurityManager.java:979)
            2> 	at java.io.File.mkdir(File.java:1237)
            2> 	at java.io.File.mkdirs(File.java:1266)
            2> 	at org.apache.lucene.store.NativeFSLock.obtain(NativeFSLockFactory.java:136)
            2> 	at org.apache.lucene.store.MockLockFactoryWrapper$MockLock.obtain(MockLockFactoryWrapper.java:72)
            2> 	at org.apache.lucene.store.Lock.obtain(Lock.java:77)
            2> 	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:714)
            2> 	at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
            2> 	at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
            2> 	at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:521)
            2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:775)
            2> 	... 12 more
          
          Ryan Ernst added a comment -

          I spun off SOLR-6055 for that particular issue. I'm not exactly sure how to solve it (looks like a real bug).

          Ryan Ernst added a comment -

          Thanks Dawid, your refactorings look good.

          This patch adds fixes for the solr tests. There is one nocommit for SOLR-6055.

          Ryan Ernst added a comment -

          New patch. All tests pass.

          I fixed SOLR-6055 in this by adding a separate update log dir to SolrCore, which is independently forced to be absolute (for the local filesystem, not the DirectoryFactory). I also made the javaTempDir in the new test rule always absolute so that the ulog isAbsolute check would work.
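
          The absolute-path normalization mentioned here can be as small as the following (a sketch, not the actual rule code):

            import java.io.File;

            final class AbsoluteTmpDirSketch {
              static File normalizeJavaTempDir() {
                // Resolve java.io.tmpdir eagerly so later isAbsolute() checks (e.g. for the ulog dir) hold.
                File javaTempDir = new File(System.getProperty("java.io.tmpdir")).getAbsoluteFile();
                System.setProperty("java.io.tmpdir", javaTempDir.getPath());
                return javaTempDir;
              }
            }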

          Dawid Weiss added a comment -

          Awesome, thanks Ryan! I'll rerun the tests in the evening and then commit.

          Dawid Weiss added a comment -

          Hi Ryan. Just browsed through the patch. Is this commented-out nocommit still applicable?

          +          // nocommit: this check needs to be fixed, see SOLR-6055
          +          //if (!new File(dataDir).isAbsolute()) {
          

          I'm running the tests now.

          Dawid Weiss added a comment -

          I think your last patch was inconsistent – it didn't include the new rule, for example (only patched javaTempDir.getAbsoluteFile).

          I've tried to consolidate all of it and pushed it to a branch:
          https://svn.apache.org/repos/asf/lucene/dev/branches/lucene5650

          I'm running the tests right now. I still don't know what to make of the nocommit – please fix it on the branch, add a CHANGES entry, and I think we're ready to go?

          Dawid Weiss added a comment -

          org.apache.lucene.search.join.TestBlockJoin fails for me on the branch, but I think it's an unrelated issue (it also fails on trunk).

          I think I screwed something up when merging your patch, though, because Solr tests fail for me with access denied. Could you take a look at the branch and diff it with your own code?

          ASF subversion and git services added a comment -

          Commit 1595546 from Ryan Ernst in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1595546 ]

          LUCENE-5650: fix some solr tests

          Ryan Ernst added a comment -

          Sorry about that. The nocommit was left by mistake. The failure was a goof on my part. I've put a fix for it in the branch.

          ASF subversion and git services added a comment -

          Commit 1595551 from Ryan Ernst in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1595551 ]

          LUCENE-5650: fix one more solr test

          ASF subversion and git services added a comment -

          Commit 1595562 from Ryan Ernst in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1595562 ]

          LUCENE-5650: fix solrj test

          Dawid Weiss added a comment -

          Are you sure you don't have any exclusion properties in your default setup? Because I still see a number of failures. I just fixed two tests in map-reduce, but there are a number of other failures left:

          [22:26:32.432] ERROR   0.00s J3 | TestICUCollationFieldOptions (suite) <<<
             > Throwable #1: org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: access denied ("java.io.FilePermission" "analysis-extras\solr\collection1\conf" "write")
          
          [22:26:32.459] ERROR   0.00s J2 | TestFoldingMultitermExtrasQuery (suite) <<<
             > Throwable #1: org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: access denied ("java.io.FilePermission" "analysis-extras\solr\collection1\conf" "write")
          
          [22:26:48.767] ERROR   0.00s J1 | CarrotClusteringEngineTest (suite) <<<
             > Throwable #1: org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: access denied ("java.io.FilePermission" "clustering\solr\collection1\conf" "write")
          
          [22:27:52.483] FAILURE 0.04s | TestDocBuilder.testDeltaImportNoRows_MustNotCommit <<<
          (caused by, in the logs)
            2> Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" ".\dataimport.properties" "write")
          
          [22:31:24.108] ERROR   0.36s | VelocityResponseWriterTest.testSolrResourceLoaderTemplate <<<
             > Throwable #1: java.lang.RuntimeException: org.apache.velocity.exception.VelocityException: Error initializing log: Failed to initialize an instance of org.apache.velocity.runtime.log.Log4JLogChute with the current runtime configuration.
          ...
             > Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "velocity.log" "write")
          [22:26:07.003] ERROR   0.05s J0 | TestLBHttpSolrServer.testReliability <<<
             > Throwable #1: java.security.AccessControlException: access denied ("java.io.FilePermission" "org.apache.solr.client.solrj.TestLBHttpSolrServer$SolrInstance-1400444767005\solr\collection10" "write")
          

          These are the few I quickly noticed. Did they pass for you? My master seed was 324BDB00052353EE

          Ryan Ernst added a comment -

          > These are the few I quickly noticed. Did they pass for you?

          I have now realized I've been running these tests all wrong. I had one unrelated failure in my run, which I did not realize would stop other tests from running. I've now run with -Dtests.haltonfailure=false (thanks for the tip) and I see all these failures.

          Dawid Weiss added a comment -

          No problem. I'll be polishing my Buzzwords presentation today, but once I'm done I'll try to take a look at these too. There seems to be a common underlying cause (logger sinks, test collection) so hopefully it's not as bad as the count of errors suggests.

          Still, I think it's a really valuable improvement and worth pursuing.

          ASF subversion and git services added a comment -

          Commit 1596094 from Ryan Ernst in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1596094 ]

          LUCENE-5650: fix more contrib tests

          Ryan Ernst added a comment -

          OK, I fixed a bunch more contrib tests. There are still failures for DIH and Velocity (same ones that Robert noted in LUCENE-5154).

          Dawid Weiss added a comment -

          I handled some of the Velocity and DIH failures. Running the tests now.

          Ryan Ernst added a comment -

          Looks like we overlapped on fixing these. I like your Velocity handling better than mine (I hacked in a way to set the log file). But I'm not sure I like the DIH change. I think it is bogus to default to the CWD when running without a core (which seems to only happen in tests?). I changed this to default to solr.solr.home, then set that to a temp dir in the abstract DIH test base (see attached patch).
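
          A minimal sketch of the test-base idea described above (class and method names are assumptions; the actual change is in the attached dih.patch):

            import java.nio.file.Files;
            import org.junit.BeforeClass;

            public abstract class AbstractDIHTestBaseSketch /* extends SolrTestCaseJ4 */ {
              @BeforeClass
              public static void pointSolrHomeAtTempDir() throws Exception {
                // Keep dataimport.properties writes inside the per-test sandbox instead of the CWD.
                System.setProperty("solr.solr.home",
                    Files.createTempDirectory("dih-solr-home").toAbsolutePath().toString());
              }
            }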

          Dawid Weiss added a comment -

          Commit it to the branch, Ryan. I fixed it the way I understand how Solr's source code works (which is to say: vaguely familiar). I'm sure your patch is better.

          The build yesterday ended with this failure related to perm. denied:

          [13:07:35.284] ERROR   0.00s J1 | TestFoldingMultitermExtrasQuery (suite) <<<    
          > Throwable #1: org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: access denied ("java.io.FilePermission" "analysis-extras\solr\collection1\conf" "write")
          

          and the following, I guess notorious offenders:

            - org.apache.solr.spelling.suggest.TestAnalyzeInfixSuggestions (could not remove temp. files)
            - TestBlendedInfixSuggestions (same thing)
          
            - org.apache.solr.cloud.SyncSliceTest.testDistribSearch
             > Throwable #1: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
          
            - org.apache.solr.cloud.RecoveryZkTest.testDistribSearch 
             > Throwable #1: java.lang.AssertionError: shard1 is not consistent.  Got 92 from https://127.0.0.1:54379/_koq/fr/collection1lastClient and got 60 from https://127.0.0.1:54410/_koq/fr/collection1
          

          I won't be able to return to this today (maybe in the evening). Change my DIH fixes to yours on the branch – I don't mind at all.

          Dawid Weiss added a comment -

          One comment wrt the dih patch:

          +    System.clearProperty("solr.solr.home");
          

          I think there is a restore-sys-props rule somewhere in the upper class that will take care of this. In Lucene there is no such rule, but in Solr so many properties get set (even from other software packages) that it didn't make sense to track them all manually. You'd have to check, though; I may be wrong.
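
          Presumably this refers to something like randomizedtesting's SystemPropertiesRestoreRule; a sketch of wiring it up explicitly (whether the Solr base class already installs it is exactly what needs checking):

            import com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule;
            import org.junit.ClassRule;
            import org.junit.rules.TestRule;

            public class RestorePropsSketch {
              // Snapshots system properties before the suite and restores them afterwards,
              // so a leaked "solr.solr.home" cannot bleed into other test classes.
              @ClassRule
              public static final TestRule restoreSysProps = new SystemPropertiesRestoreRule();
            }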

          ASF subversion and git services added a comment -

          Commit 1596472 from Dawid Weiss in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1596472 ]

          LUCENE-5650: applying Ryan's DIH patch.

          ASF subversion and git services added a comment -

          Commit 1596475 from Dawid Weiss in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1596475 ]

          LUCENE-5650: fixed solr home to rw access by copying

          ASF subversion and git services added a comment -

          Commit 1596497 from Dawid Weiss in branch 'dev/branches/lucene5650'
          [ https://svn.apache.org/r1596497 ]

          SOLR-6100, LUCENE-5650: fix an uncloseable file leak in solr suggesters.

          Dawid Weiss added a comment -

          All tests passed for me with the current state of the branch (including nightlies).

          Ryan Ernst added a comment -

          +1, everything looks good to me (and tests pass for me as well).

          Dawid Weiss added a comment -

          Please commit it to trunk, Ryan! I'll be at work in ~9 hours, so if something pops up in the Jenkins runs I'll take care of it.

          Dawid Weiss added a comment -

          I'm merging with the trunk right now. Will commit in a moment.

          ASF subversion and git services added a comment -

          Commit 1596767 from Dawid Weiss in branch 'dev/trunk'
          [ https://svn.apache.org/r1596767 ]

          LUCENE-5650: Enforce read-only access to any path outside the temporary folder via security manager

          Dawid Weiss added a comment -

          I've committed this patch to trunk. Let's let it bake a bit before backporting to 4x. I've made a few cosmetic changes while merging, so if backporting, use changeset 1596767 from trunk.

          Steve Rowe added a comment - edited

          I'm seeing what look like security manager-related exceptions on trunk with o.a.s.search.TestRecoveryHdfs on OS X 10.9.3 w/Oracle Java 1.7.0_55 - here's the first exception (9 out of 9 non-ignored tests fail):

             [junit4] <JUnit4> says 你好! Master seed: 37AA21AA8F6886DF
             [junit4] Executing 1 suite with 1 JVM.
             [junit4] 
             [junit4] Started J0 PID(36452@smb.local).
             [junit4] Suite: org.apache.solr.search.TestRecoveryHdfs
          [...]
             [junit4]   2> 7871 T118 oasc.CoreContainer.recordAndThrow ERROR Unable to create core: collection1 org.apache.solr.common.SolrException: Problem creating directory: solr_hdfs_home/collection1/Users/sarowe/svn/lucene/dev/trunk4/solr/build/solr-core/test/J0/temp/solr.search.TestRecoveryHdfs-37AA21AA8F6886DF-001/init-core-data-001
             [junit4]   2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:885)
             [junit4]   2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:649)
             [junit4]   2> 	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
             [junit4]   2> 	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
             [junit4]   2> 	at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
             [junit4]   2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
             [junit4]   2> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
             [junit4]   2> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
             [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
             [junit4]   2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
             [junit4]   2> 	at java.lang.Thread.run(Thread.java:745)
             [junit4]   2> Caused by: java.lang.RuntimeException: Problem creating directory: solr_hdfs_home/collection1/Users/sarowe/svn/lucene/dev/trunk4/solr/build/solr-core/test/J0/temp/solr.search.TestRecoveryHdfs-37AA21AA8F6886DF-001/init-core-data-001
             [junit4]   2> 	at org.apache.solr.store.hdfs.HdfsDirectory.<init>(HdfsDirectory.java:87)
             [junit4]   2> 	at org.apache.solr.core.HdfsDirectoryFactory.create(HdfsDirectoryFactory.java:148)
             [junit4]   2> 	at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:351)
             [junit4]   2> 	at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:273)
             [junit4]   2> 	at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:485)
             [junit4]   2> 	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:791)
             [junit4]   2> 	... 10 more
             [junit4]   2> Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "/Users/sarowe/svn/lucene/dev/trunk4/solr/build/solr-core/test/J0" "write")
             [junit4]   2> 	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
             [junit4]   2> 	at java.security.AccessController.checkPermission(AccessController.java:559)
             [junit4]   2> 	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
             [junit4]   2> 	at java.lang.SecurityManager.checkWrite(SecurityManager.java:979)
             [junit4]   2> 	at java.io.File.mkdir(File.java:1305)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
             [junit4]   2> 	at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:584)
             [junit4]   2> 	at org.apache.solr.store.hdfs.HdfsDirectory.<init>(HdfsDirectory.java:63)
             [junit4]   2> 	... 15 more
             [junit4]
          [...] 
             [junit4] Tests with failures:
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testLogReplay
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testRemoveOldLogs
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testBufferingFlags
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testCleanShutdown
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testCorruptLog
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testVersionsOnRestart
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testRecoveryMultipleLogs
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testTruncatedLog
             [junit4]   - org.apache.solr.search.TestRecoveryHdfs.testBuffering
          
          Dawid Weiss added a comment -

          This looks like a screwup in Hadoop because it attempts to create all parent folders, including those it has no access to (and which MUST already exist at that time):

          Caused by: java.security.AccessControlException: access denied ("java.io.FilePermission" "/Users/sarowe/svn/lucene/dev/trunk4/solr/build/solr-core/test/J0" "write")
             [junit4]   2> 	at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
             [junit4]   2> 	at java.security.AccessController.checkPermission(AccessController.java:559)
             [junit4]   2> 	at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
             [junit4]   2> 	at java.lang.SecurityManager.checkWrite(SecurityManager.java:979)
             [junit4]   2> 	at java.io.File.mkdir(File.java:1305)
             [junit4]   2> 	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:427)
          ...
          

          Here is what this routine looks like:

            /**
             * Creates the specified directory hierarchy. Does not
             * treat existence as an error.
             */
            @Override
            public boolean mkdirs(Path f) throws IOException {
              if(f == null) {
                throw new IllegalArgumentException("mkdirs path arg is null");
              }
              Path parent = f.getParent();
              File p2f = pathToFile(f);
              if(parent != null) {
                File parent2f = pathToFile(parent);
                if(parent2f != null && parent2f.exists() && !parent2f.isDirectory()) {
                  throw new FileAlreadyExistsException("Parent path is not a directory: " 
                      + parent);
                }
              }
              return (parent == null || mkdirs(parent)) &&
                (p2f.mkdir() || p2f.isDirectory());
            }
          

          I think this is an error in Hadoop – they always mkdirs() on parent folders, even if they exist.
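
          A defensive variant that stops recursing once a parent already exists would avoid the spurious write check (a sketch only, not a proposed Hadoop patch):

            import java.io.File;
            import java.io.IOException;

            final class SafeMkdirs {
              /** Create the directory hierarchy without calling mkdir() on directories that already exist. */
              static boolean mkdirs(File dir) throws IOException {
                if (dir == null) {
                  throw new IllegalArgumentException("mkdirs path arg is null");
                }
                if (dir.isDirectory()) {
                  return true; // already there, no permission check triggered
                }
                if (dir.exists()) {
                  throw new IOException("Path exists but is not a directory: " + dir);
                }
                File parent = dir.getParentFile();
                return (parent == null || mkdirs(parent)) && (dir.mkdir() || dir.isDirectory());
              }
            }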

          Steve Rowe added a comment -

          Yeah, that does look dumb. And pathToFile() absolutizes relative directories against the CWD, so using a relative dir won't help.

          I found a solution that allows all the tests to succeed on my box, though I admit it's totally cargo-culted from o.a.s.cloud.hdfs.StressHdfsTest:

          Index: solr/core/src/test/org/apache/solr/search/TestRecoveryHdfs.java
          ===================================================================
          --- solr/core/src/test/org/apache/solr/search/TestRecoveryHdfs.java	(revision 1599731)
          +++ solr/core/src/test/org/apache/solr/search/TestRecoveryHdfs.java	(working copy)
          @@ -78,6 +78,7 @@
             @BeforeClass
             public static void beforeClass() throws Exception {
               dfsCluster = HdfsTestUtil.setupClass(createTempDir().getAbsolutePath());
          +    System.setProperty("solr.hdfs.home", dfsCluster.getURI().toString() + "/solr");
               hdfsUri = dfsCluster.getFileSystem().getUri().toString();
               
               try {
          

          Any objections to committing this? Mark Miller?

          Mark Miller added a comment -

          That looks like a workaround for a test bug JIRA I opened a while back. I'm on my phone so it's hard to dig it up. solr.hdfs.home was being set to the local fs rather than an hdfs location. It looks like you're setting it correctly after the line above sets it incorrectly to the local fs.

          If that's the case, the fix actually belongs in the hdfs util call right above - instead of where it incorrectly sets solr.hdfs.home. But feel free to just use your patch if you want and I'll clean it up when I resolve that issue.

          Steve Rowe added a comment -

          > But feel free to just use your patch if you want and I'll clean it up when I resolve that issue.

          Thanks, I'll do that.

          ASF subversion and git services added a comment -

          Commit 1600310 from Steve Rowe in branch 'dev/trunk'
          [ https://svn.apache.org/r1600310 ]

          LUCENE-5650: Reset solr.hdfs.home correctly to allow TestRecoveryHdfs tests to pass

          Hoss Man added a comment -

          Back in May, Dawid Weiss mentioned letting this soak on trunk a bit before backporting... did it slip through the cracks?

          FWIW: SOLR-6410 popped up on 4x but was already fixed on trunk as part of this issue. I'm going to backport just the key elements of this issue related to that bug to 4x under the banner of SOLR-6410, in order to backport to branch_4_10 as well.

          ASF subversion and git services added a comment -

          Commit 1619947 from hossman@apache.org in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1619947 ]

          SOLR-6410: Ensure all Lookup instances are closed via CloseHook (merge r1596767 from LUCENE-5650 just for the solr/spelling/suggest paths; and merge r1619946 for the CHANGES.txt entry)

          ASF subversion and git services added a comment -

          Commit 1620054 from Ryan Ernst in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1620054 ]

          LUCENE-5650: Enforce read-only access to any path outside the temporary folder via security manager (merged r1596767, r1600310)

          ASF subversion and git services added a comment -

          Commit 1620055 from Ryan Ernst in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1620055 ]

          LUCENE-5650: Enforce read-only access to any path outside the temporary folder via security manager (merged r1596767, r1600310)

          ASF subversion and git services added a comment -

          Commit 1620056 from Ryan Ernst in branch 'dev/trunk'
          [ https://svn.apache.org/r1620056 ]

          LUCENE-5650: Move changes entry to reflect backport to 4x

          ASF subversion and git services added a comment -

          Commit 1620057 from Ryan Ernst in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1620057 ]

          LUCENE-5650: Move changes entry to reflect backport to 4x

          Dawid Weiss added a comment -

          Thank you for doing the backport, Ryan!

          Anshum Gupta added a comment -

          Bulk close after 5.0 release.


            People

            • Assignee:
              Dawid Weiss
              Reporter:
              Ryan Ernst
            • Votes:
              0
              Watchers:
              6
