# Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

## Details

• Type: Improvement
• Status: Patch Available
• Priority: Major
• Resolution: Unresolved
• Affects Version/s: 3.0.0, 2.2.0
• Fix Version/s: None
• Component/s: None
• Labels: None
• Target Version/s: None

## Description

FileSystem and FileContract aren't tested rigorously enough; while HDFS gets tested downstream, other filesystems, such as blobstore bindings, don't get the same coverage.

The only tests that are common are those of FileSystemContractTestBase, which HADOOP-9258 shows is incomplete.

I propose:

1. writing more tests which clarify the expected behavior
2. giving the operations of the interface their own JUnit 4 test classes, instead of one big test suite
3. having each FS declare, via a properties file, which behaviors it offers, such as atomic-rename, atomic-delete, umask, and immediate-consistency; test methods can downgrade to skipped test cases if a feature is missing
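A minimal sketch of how point 3 could look (all names here are hypothetical, not from the actual patch): a contract loads its behavior flags from a properties file, and test code can consult them to decide whether to run or skip:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

/**
 * Sketch of a per-filesystem feature declaration: each FS ships a
 * properties file listing the behaviors it offers, and a test can
 * downgrade itself to "skipped" when a feature is absent.
 * Hypothetical class and key names, for illustration only.
 */
public class ContractFeatures {
    private final Properties props = new Properties();

    public ContractFeatures(String propertiesText) {
        try {
            props.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for a StringReader
        }
    }

    /** True iff the filesystem declares the named behavior. */
    public boolean supports(String feature) {
        return Boolean.parseBoolean(props.getProperty(feature, "false"));
    }

    public static void main(String[] args) {
        // An imaginary declaration for a blobstore binding.
        ContractFeatures blobstore = new ContractFeatures(
            "atomic-rename=false\n" +
            "atomic-delete=false\n" +
            "immediate-consistency=false\n");

        if (!blobstore.supports("atomic-rename")) {
            System.out.println("SKIPPED: testAtomicRename (atomic-rename unsupported)");
        } else {
            System.out.println("RUN: testAtomicRename");
        }
    }
}
```

In JUnit 4 the skip itself would be expressed with `Assume.assumeTrue(supports(...))`, which reports the test as skipped rather than passed or failed.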

## Attachments

• 65 kB (Steve Loughran)
• 146 kB (Steve Loughran)
• 225 kB (Steve Loughran)
• 248 kB (Steve Loughran)
• 276 kB (Steve Loughran)
• 295 kB (Steve Loughran)

## Activity

Steve Loughran added a comment -

These tests must all be enabled/disabled dynamically by the FS-specific subclasses, so that tests that cannot run against a filesystem for any reason (e.g. no S3 credentials) can be downgraded to a skip, which then appears in the test reports. This will make it clear that the tests were not run, whereas today the JUnit3-derived tests are named so as not to match the *Test pattern, and must be explicitly run by hand.
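The dynamic enable/disable idea can be sketched like this (a toy model with hypothetical names; in real JUnit 4 the skip would come from Assume.assumeTrue() in a @Before method):

```java
/**
 * Sketch: a base contract test checks enabled() in its setup and converts
 * "cannot run" into a visible skip rather than silently not matching the
 * *Test name pattern. Modelled with a plain exception so the sketch is
 * self-contained; JUnit 4 would use Assume instead.
 */
public class DynamicSkipSketch {
    /** Thrown by setup to mark a test as skipped, not failed. */
    static class SkippedException extends RuntimeException {
        SkippedException(String why) { super(why); }
    }

    interface Contract { boolean enabled(); }

    static String runTest(Contract contract, Runnable testBody) {
        try {
            if (!contract.enabled()) {
                throw new SkippedException("contract disabled (e.g. no S3 credentials)");
            }
            testBody.run();
            return "PASSED";
        } catch (SkippedException e) {
            return "SKIPPED: " + e.getMessage(); // shows up in test reports
        }
    }

    public static void main(String[] args) {
        Contract noCredentials = () -> false;
        System.out.println(runTest(noCredentials, () -> {}));
    }
}
```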

Steve Loughran added a comment -

We need to be able to generate HTML/PDF from markdown if the docs are to be done in markdown.

Steve Loughran added a comment -

rename base JIRA

Steve Loughran added a comment -

This is my first prototype of a contract-driven FS test suite. Every FS has to implement AbstractFSContract, which provides an FS factory and a test working dir, and reports whether various options are supported (and, eventually, limits). Options include things like supports-unix-permissions and is-case-sensitive, as well as test options like root-tests-enabled, that being a flag which, if set, enables tests to do things to a root dir like renaming and deleting it.

There is a contract for the local FS, which fails because the seek operations of local don't quite follow the expectations of HADOOP-9495, which is something to consider. That FS contract dynamically chooses the case-sensitivity and unix-permissions features based on the OS it is running on.
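A rough sketch of the AbstractFSContract idea described above (simplified, hypothetical signatures, not the actual patch): each filesystem provides a scheme, a test working directory, and answers feature queries, with the local contract picking features from the host OS:

```java
/**
 * Simplified model of a per-filesystem contract. The real class would
 * also act as a FileSystem factory; that is elided here so the sketch
 * stays self-contained.
 */
public abstract class FSContractSketch {
    /** URI scheme of the filesystem under test. */
    public abstract String getScheme();

    /** Where tests may create and destroy paths. */
    public abstract String getTestWorkingDir();

    /** Does this FS offer the named behavior/test option? */
    public abstract boolean isSupported(String feature);

    /** The local FS contract chooses features from the host OS. */
    public static FSContractSketch localContract() {
        final boolean caseSensitive =
            !System.getProperty("os.name", "").toLowerCase().contains("windows");
        return new FSContractSketch() {
            public String getScheme() { return "file"; }
            public String getTestWorkingDir() {
                return System.getProperty("java.io.tmpdir");
            }
            public boolean isSupported(String feature) {
                if ("is-case-sensitive".equals(feature)) return caseSensitive;
                if ("root-tests-enabled".equals(feature)) return false; // opt-in only
                return false;
            }
        };
    }
}
```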


-1 overall. Here are the results of testing the latest attachment
against trunk revision .

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 15 new or modified test files.

-1 javac. The applied patch generated 1154 javac compiler warnings (more than the trunk's current 1153 warnings).

+1 eclipse:eclipse. The patch built with eclipse:eclipse.

+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 contrib tests. The patch passed contrib unit tests.

This message is automatically generated.

Steve Loughran added a comment -

This patch

1. specifies a Hadoop filesystem
2. begins the notion of FS contract tests, with a Hadoop-compatible filesystem modeled as a set of paths mapping to data and metadata.
3. defines a few initial tests: directory operations, seek operations (lifted from the HADOOP-8545 tests), and a test suite for FileSystem.concat()

#### Tries to define the behaviour of the FileSystem class rigorously

1. It does this with a model of a filesystem as a set of paths mapping to metadata and data, which lets us use set theory to select parts of the filesystem, and to describe how a modified filesystem is derived from its predecessor as a result of an operation.
2. specifying the preconditions and postconditions of the main operations, based on the behaviour of HDFS.
3. begins the notion of FS contract tests, with a Hadoop-compatible filesystem modeled as a set of paths mapping to data and metadata.
4. defines a few initial tests: directory operations, seek operations (lifted from the HADOOP-8545 tests), and a test suite for FileSystem.concat()

Every filesystem to be tested must implement a subclass of AbstractFileSystemContract, which contains a factory for its FileSystem instances (and could easily do the same for FileContext), as well as a method boolean supports(feature: String) that returns true/false for features like "supports-append". For the local FS and HDFS, the definition of supported features is done from XML config files, using keys like fs.$filesystem.contract.supports-append. This allows the future option for these declarations to move into the core- XML files. The LocalFSContract class updates some of its supported features (is-case-sensitive, supports-unix-permissions) based on the features of the underlying FS.

Every contract also has a couple of test-only features: supports-root-tests (can you do things like rm / in a test run?) and a method to see if the tests are enabled (boolean enabled()). This lets contract test suites disable themselves if they can't run (i.e. when the logon details for a blobstore aren't in the test configurations). The enabled() option is checked in the test setups; the other features can be tested in the specific tests, and all downgrade to a skipped test if not set. This makes it visible which tests have not actually run.

#### Instantiates contract tests for the local FS and HDFS

Note that LocalFS fails the seek tests, as it downgrades negative seeks to a no-op.

Having written these tests, I can see some limitations of the process.

• the tests have to look for the loosest exception type coming back from a failure, e.g. IOException over EOFException.
• needs a story for timeouts: we must have these to deal with blobstores &c, and either need a way to allow FS contracts to define these, let the FS-specific tests define them, or just have some base timeouts large enough to deal with blobstores, with operations like delete(dir) being O(children(dir)).

#### Specification Syntax

I'm not convinced the syntax for specifications is great, nor am I sure I'm even using it consistently. I've just had to do something minimal that fits into unformatted text in a .apt file:

FileSystem.listStatus(Path P, PathFilter Filter)

A PathFilter is a predicate function that returns true iff the path P meets the filter's requirements.

```
Preconditions
-----------
# path must exist
exists(FS, P) || throw FileNotFoundException
-----------

Postconditions
-----------
isFile(FS, P) && Filter(P)  => [FileStatus(FS, P)]
isFile(FS, P) && !Filter(P) => []
isDir(FS, P)  => [all C in children(FS, P) where Filter(C) == true]
-----------
```

This blurs C/Java symbols with bits of set theory, and trying to define what exceptions to throw on failed preconditions is something that really needs improving. If we could use LaTeX we could do proper set syntax, though we'd still need a consistent language for defining filesystem implementation behaviours.

```
isFile(FS, P) \land \neg Filter(P) \Rightarrow \emptyset
isDir(FS, P) \Rightarrow \left\{ \forall C \in children(FS, P) : Filter(C) \right\}
```

This is actually harder to read. One other tactic could be to move the specification entirely to javadocs, with something at the package level for the core model & notation, then add precondition/postcondition specifics as preformatted areas in the docs. This would keep the spec by the signature, increase the likelihood of maintenance, and let those people who understand the syntax see what the methods do.
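To illustrate, the listStatus pre/postconditions can be executed directly against the "set of paths" model, here reduced to a toy map rather than a real FileSystem (the names ListStatusModel and isChildOf are mine, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Predicate;

/**
 * The listStatus pre/postconditions expressed as executable code over a
 * toy filesystem model: a map of path -> isDirectory.
 */
public class ListStatusModel {
    static List<String> listStatus(Map<String, Boolean> fs, String p,
                                   Predicate<String> filter) {
        // Precondition: exists(FS, P) || throw FileNotFoundException
        if (!fs.containsKey(p)) {
            throw new RuntimeException("FileNotFoundException: " + p);
        }
        List<String> result = new ArrayList<>();
        if (!fs.get(p)) {                       // isFile(FS, P)
            if (filter.test(p)) result.add(p);  // => [FileStatus(FS, P)] else []
        } else {                                // isDir(FS, P)
            for (String c : fs.keySet()) {      // children where Filter(C)
                if (isChildOf(c, p) && filter.test(c)) result.add(c);
            }
        }
        return result;
    }

    /** Direct child test on string paths. */
    static boolean isChildOf(String child, String dir) {
        return child.startsWith(dir + "/")
            && child.indexOf('/', dir.length() + 1) < 0;
    }

    public static void main(String[] args) {
        Map<String, Boolean> fs = new TreeMap<>();
        fs.put("/a", true);
        fs.put("/a/x", false);
        fs.put("/a/y", false);
        System.out.println(listStatus(fs, "/a", c -> c.endsWith("x")));
    }
}
```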
Steve Loughran added a comment -

contract tests for local and HDFS. Local seeks wrong


-1 overall. Here are the results of testing the latest attachment
against trunk revision .

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 22 new or modified test files.

-1 javac. The applied patch generated 1154 javac compiler warnings (more than the trunk's current 1153 warnings).

+1 eclipse:eclipse. The patch built with eclipse:eclipse.

+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 contrib tests. The patch passed contrib unit tests.

This message is automatically generated.

Mostafa Elhemali added a comment -

Thanks Steve for starting this. Personally I'm really glad to see more abstract testing-against-the-contract efforts for the file systems in Hadoop so it's great to see this. My comments upon first reading of the code (I didn't read the specs yet so no comments on those):

1. Personal aesthetic point: I'd have personally preferred if the contract was not in XML config, but just in code; and that isSupported() just took an enum that provided the feature. Code is much easier and simpler to read in this case, and there's no real need to "configure" a contract for a file system. And already you modify the configuration based on code in e.g. LocalFSContract, so it's just easier to read if it was all in code.
2. Typo: SUPPORTS_CONTAT should be SUPPORTS_CONCAT
3. In assertPathExists(), I think you meant to include the ls() content in the fail() message rather than just call it.
4. In testConcatOnSel() - missed failing if an error isn't thrown.
5. It would be great to catch() more specific exceptions in the Concat tests - i.e. have the contract specify what exception to expect as well as just an exception being thrown.
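Point 1 above, a contract wired entirely in code with an enum of features, might look like this (hypothetical names, just to make the suggestion concrete):

```java
/**
 * Sketch of the suggestion: feature flags as an enum consulted in code,
 * rather than XML configuration.
 */
public class EnumContractSketch {
    enum Feature { SUPPORTS_APPEND, SUPPORTS_CONCAT, IS_CASE_SENSITIVE }

    interface Contract { boolean isSupported(Feature f); }

    /** A contract defined entirely in code, e.g. for the local FS on Unix. */
    static Contract localFs() {
        return f -> f == Feature.IS_CASE_SENSITIVE;
    }

    public static void main(String[] args) {
        Contract c = localFs();
        System.out.println(c.isSupported(Feature.SUPPORTS_CONCAT)); // prints false
    }
}
```

An enum also makes typos like SUPPORTS_CONTAT a compile-time error rather than a silently unmatched string key.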
Steve Loughran added a comment -

This is up as a work in progress; if you add the right details to the auth-keys.xml file in common/src/test/resources it will test S3n and ftp filesystems as well as local; HDFS adds its tests too.

Some variances:

1. HDFS won't let you rm / even if empty, or rm -rf /.
2. FTP throws FileNotFoundException if you try to delete a file that isn't there.
3. FTP (the back end?) will overwrite a directory with a file creation.
4. RawLocalFS.rename() will try to do a File.rename() operation and fall back to a copy; this is the outstanding issue from HDFS-303, "What should consistent rename actions be".
5. S3n will not only let you overwrite a dir with a file (a side effect of how blobstores use 0-byte files as dir markers), it will do this even if the destination has children. It could check for that.
Steve Loughran added a comment -

Marking this as: rename needs to use the definition of HADOOP-6240 for its definition & tests.

Steve Loughran added a comment -

The latest patch now has tests for: create, open, delete, mkdir, and seek. I'm ignoring the rename tests as I need to fully understand what HADOOP-6240 has defined first.

### Seek

1. I've been through the code and fixed it wherever a negative seek was ignored or raised a plain IOException, turning those cases into an EOFException. This included changes to ChecksumFileSystem, RawLocalFileSystem, BufferedFSInputStream (which also now handles a null inner stream without NPEing), and FSInputChecker.
2. pulled in the test from HADOOP-9307 to do many random seeks and reads; the number of seeks is configurable, so that remote blobstore tests don't take forever unless you want them to (or are running them in-cluster).
3. some filesystems let you seek over a closed stream. I've fixed the NPE in BufferedFSInputStream, but I'm not sure it is worth the effort of fixing this everywhere.
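The negative-seek fix described above amounts to the following check, shown here on a toy stream rather than the actual BufferedFSInputStream (class and field names are illustrative):

```java
import java.io.EOFException;
import java.io.IOException;

/**
 * Sketch of the seek contract: a negative offset, or an offset past the
 * end of the data, raises EOFException rather than being silently
 * ignored or surfacing as a bare IOException.
 */
public class SeekSketch {
    private long pos;
    private final long length;

    SeekSketch(long length) { this.length = length; }

    public void seek(long offset) throws IOException {
        if (offset < 0) {
            throw new EOFException("Cannot seek to a negative offset: " + offset);
        }
        if (offset > length) {
            throw new EOFException("Cannot seek past end of stream: " + offset);
        }
        pos = offset;
    }

    public long getPos() { return pos; }

    public static void main(String[] args) throws IOException {
        SeekSketch in = new SeekSketch(1024);
        in.seek(10);
        try {
            in.seek(-1);
        } catch (EOFException expected) {
            System.out.println("EOFException, as the contract expects");
        }
    }
}
```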

### NativeS3 issues/changes

• Jets3tNativeFileSystemStore converts the relevant S3 error code "InvalidRange" into an EOFException
• Amazon S3 rejects a seek(0) in a zero-byte file; not fixed yet as you need to know the file length to do it up front. Maybe an EOFException on a seek could be downgraded to a no-op if the seek offset is 0.
• throws a FileAlreadyExistsException if trying to create a file over an existing one when overwrite is false
• I'm deliberately skipping the test where we expect creating a file over a dir to fail even if overwrite is true, because blobstores use 0-byte files as a pretend directory.
• It's failing a test which overwrites a directory which has children. This could be picked up (look for children if overwriting a 0-byte file).
• It fails a test that a newly created file exists while the write is still in progress; as the blobstores only write at the end of the file, it doesn't. This is potentially a race condition; we could create a marker file here and overwrite it on the close.
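The suggested workaround for S3 rejecting seek(0) on a zero-byte file can be sketched as a wrapper that catches the EOFException and downgrades it to a no-op only when the requested offset is 0 (hypothetical names, not the patch):

```java
import java.io.EOFException;
import java.io.IOException;

/**
 * Sketch: tolerate an EOFException from the underlying store only for a
 * seek to offset 0, so empty objects behave like empty local files.
 */
public class ZeroSeekSketch {
    interface Seekable { void seek(long offset) throws IOException; }

    static void tolerantSeek(Seekable in, long offset) throws IOException {
        try {
            in.seek(offset);
        } catch (EOFException e) {
            if (offset != 0) throw e; // only a seek(0) may be downgraded
        }
    }

    public static void main(String[] args) throws IOException {
        // A stand-in for S3's behaviour on an empty object: every seek fails.
        Seekable emptyObject = off -> { throw new EOFException("InvalidRange"); };
        tolerantSeek(emptyObject, 0); // downgraded to a no-op
        System.out.println("seek(0) tolerated");
    }
}
```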

### FTP

I'll cover that in HADOOP-9712, as it's mostly bugs in a niche FS.

### LocalFS

• throws FileNotFoundException when attempting to create a file where the destination or a parent is a directory. This happens inside the JDK and has to be a WONTFIX, unless it is caught and wrapped.

```
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.localfs.TestLocalCreateContract)  Time elapsed: 38 sec  <<< ERROR!
java.io.FileNotFoundException: /Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testOverwriteNonEmptyDirectory (File exists)
	at java.io.FileOutputStream.open(Native Method)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:227)
	at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:223)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:286)
	at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:273)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:384)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:888)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:869)
	at org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:130)
	at org.apache.hadoop.fs.contract.AbstractCreateContractTest.testOverwriteNonEmptyDirectory(AbstractCreateContractTest.java:115)
```

• if you call mkdir(path-to-a-file) you get a 0 return code, but no exception is thrown. This is inconsistent with HDFS.

```
testNoMkdirOverFile(org.apache.hadoop.fs.contract.localfs.TestLocalDirectoryContract)  Time elapsed: 46 sec  <<< FAILURE!
java.lang.AssertionError: mkdirs succeeded over a file: ls file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testNoMkdirOverFile[00] RawLocalFileStatus{path=file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/testNoMkdirOverFile; isDirectory=false; length=1024; replication=1; blocksize=33554432; modification_time=1373457007000; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.hadoop.fs.contract.AbstractDirectoryContractTest.testNoMkdirOverFile(AbstractDirectoryContractTest.java:68)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
```

### HDFS Ambiguities

• you can't rm / an empty root dir, or rm -rf / a non-empty root dir. This may be a good design choice for safety, but it is not consistent with the behaviours of all (tested) filesystems. I haven't tested FTP or local FS though, for obvious reasons (these tests are only run if you subclass the relevant test, and explicitly enable it).
• FileAlreadyExistsException is thrown instead of ParentNotDirectoryException when a mkdir is made with a parent file:

```
testMkdirOverParentFile(org.apache.hadoop.fs.contract.hdfs.TestHDFSDirectoryContract)  Time elapsed: 48 sec  <<< ERROR!
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1880)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1876)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1874)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1880)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1876)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1874)

	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:467)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2322)
	... 37 more
```
org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a directory: /test/testMkdirOverParentFile testMkdirOverParentFile at org.apache.hadoop.hdfs.server.namenode.FSDirectory.mkdirs(FSDirectory.java:1906) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3182) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3141) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3114) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:692) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:502) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1880) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1876) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1874) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at 
org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2324) at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2293) at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:568) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915) at org.apache.hadoop.fs.contract.AbstractDirectoryContractTest.testMkdirOverParentFile(AbstractDirectoryContractTest.java:95) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30) at org.junit.runners.ParentRunner.run(ParentRunner.java:300) at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189) at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165) at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): Parent path is not a directory: /test/testMkdirOverParentFile testMkdirOverParentFile at org.apache.hadoop.hdfs.server.namenode.FSDirectory.mkdirs(FSDirectory.java:1906) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3182) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3141) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3114) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:692) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:502) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1880) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1876) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1874) at org.apache.hadoop.ipc.Client.call(Client.java:1314) at org.apache.hadoop.ipc.Client.call(Client.java:1266) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at com.sun.proxy.$Proxy16.mkdirs(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:163) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:82) at com.sun.proxy.$Proxy16.mkdirs(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:467) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2322) ... 37 more
Steve Loughran added a comment -

A patch which contains the tests. The ftp and s3n tests won't run unless test filesystems are provided; only the local and HDFS tests run by default, which will show up some of the ambiguities.


-1 overall. Here are the results of testing the latest attachment
against trunk revision .

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 52 new or modified test files.

-1 javac. The applied patch generated 1154 javac compiler warnings (more than the trunk's current 1153 warnings).

+1 eclipse:eclipse. The patch built with eclipse:eclipse.

+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 contrib tests. The patch passed contrib unit tests.

This message is automatically generated.

Steve Loughran added a comment -

This updates the tests with:

1. SwiftFS contract (now that HADOOP-8545 is in)
2. appends contract test, with HDFS the only tested implementation.

There's an issue with append that this test shows up (it fails): what happens if a file that is being appended to is renamed?

In HDFS, the answer appears to be "it keeps the old name", though the test doesn't explore in detail what has happened.
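A local-filesystem analogue of the scenario can be sketched with plain java.io/java.nio (assuming a POSIX platform, where an open file descriptor follows the renamed inode). This is only an illustration of the question; it is not the HDFS contract test, and the class and method names are invented:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class AppendThenRename {
    // Appends to a file while renaming it underneath the open stream,
    // then returns the content found under the new name.
    static String appendAcrossRename(Path dir) throws IOException {
        Path oldName = dir.resolve("old.txt");
        Path newName = dir.resolve("new.txt");
        Files.write(oldName, "start".getBytes());
        try (OutputStream out = new FileOutputStream(oldName.toFile(), true)) {
            Files.move(oldName, newName);       // rename while the stream is open
            out.write("-appended".getBytes());  // append after the rename
        }
        // On POSIX the open descriptor follows the inode, so the appended
        // bytes land in the renamed file; HDFS reportedly keeps the old name.
        return new String(Files.readAllBytes(newName));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(appendAcrossRename(Files.createTempDirectory("append")));
    }
}
```

The divergence between this local behaviour and HDFS is exactly why the contract needs to pin the outcome down (or declare it undefined).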


-1 overall. Here are the results of testing the latest attachment
against trunk revision .

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 65 new or modified test files.

-1 javac. The applied patch generated 1536 javac compiler warnings (more than the trunk's current 1535 warnings).

+1 eclipse:eclipse. The patch built with eclipse:eclipse.

-1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 contrib tests. The patch passed contrib unit tests.

This message is automatically generated.

Steve Loughran added a comment -

Patch with the spec consistent with almost all HDFS behaviour; s3 and localfs are set up to throw tighter exceptions on some failures, and to throw EOFException on a seek to a negative offset.
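The rule being enforced can be captured in a few lines. This is a simplified sketch of the checked-seek pattern, where CheckedSeek is a hypothetical class, not the actual ChecksumFileSystem/RawLocalFileSystem code that the patch touches:

```java
import java.io.EOFException;
import java.io.IOException;

// Simplified sketch: a negative seek offset must raise EOFException rather
// than being silently ignored or wrapped in a plain IOException.
class CheckedSeek {
    private final long length;
    private long pos;

    CheckedSeek(long length) { this.length = length; }

    void seek(long offset) throws IOException {
        if (offset < 0) {
            throw new EOFException("Cannot seek to a negative offset: " + offset);
        }
        if (offset > length) {
            throw new EOFException("Cannot seek past the end of the file: " + offset);
        }
        pos = offset;
    }

    long getPos() { return pos; }
}
```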

## LocalFS behaviours to resolve

1. attempting to mkdir over an existing file returns false instead of raising an exception.

propose: raise an exception. Nobody ever checks the return code from mkdirs, after all, so we should uprate it to be on a par with HDFS.
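The root cause is visible in plain java.io: File.mkdirs() reports failure through a boolean rather than an exception, and the local filesystem passes that straight through. A minimal stdlib-only demonstration (class name is illustrative):

```java
import java.io.File;
import java.io.IOException;

public class MkdirOverFile {
    public static void main(String[] args) throws IOException {
        File existing = File.createTempFile("contract", ".dat");
        // java.io.File.mkdirs() signals failure with a boolean, not an
        // exception -- the behaviour the local FS currently inherits.
        boolean made = existing.mkdirs();
        System.out.println("mkdirs over a file returned: " + made); // false
        existing.delete();
    }
}
```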

2. you can seek on a stream after a close(), and it is treated as valid

options:

• fix
• ignore it, on the basis that reads or writes will fail when attempted
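One reason the seek "succeeds" is that a wrapper implementing seek lazily only records the target offset and touches nothing until the next read. A hypothetical sketch of that pattern (not BufferedFSInputStream itself):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical wrapper: seek() just records the target offset, so calling
// it after close() succeeds silently; only a later read() notices the close.
class LazySeekStream {
    private final InputStream in;
    private long pos;
    private boolean closed;

    LazySeekStream(InputStream in) { this.in = in; }

    void seek(long offset) {   // no I/O here, hence no failure after close
        pos = offset;
    }

    int read() throws IOException {
        if (closed) {
            throw new IOException("stream is closed");
        }
        in.skip(pos);
        return in.read();
    }

    void close() throws IOException {
        closed = true;
        in.close();
    }
}
```

The "fix" option amounts to adding the closed check to seek() as well; the "ignore" option relies on read() failing, as above.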

3. if you rename a file over an existing file, the operation succeeds. This is what bash does.

propose: document as the less preferred option; relax test to permit with a warn
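Both contract options exist side by side in java.nio.file, which may help when weighing them. This is an illustration with stdlib calls only, not the Hadoop rename API:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameOverExisting {
    // Returns true if the no-options rename refused to clobber dest.
    static boolean strictRenameRefused(Path src, Path dest) throws IOException {
        try {
            Files.move(src, dest);   // strict contract: fail if dest exists
            return false;
        } catch (FileAlreadyExistsException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".dat");
        Path dest = Files.createTempFile("dest", ".dat");
        System.out.println("strict rename refused: " + strictRenameRefused(src, dest));
        // bash-like contract: silently replace the destination
        Files.move(src, dest, StandardCopyOption.REPLACE_EXISTING);
        System.out.println("source gone after replace: " + Files.notExists(src));
        Files.deleteIfExists(dest);
    }
}
```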

## HDFS contract test behaviours

1. if you open a stream for append, rename the file and then do the append, the old filename remains.

Propose: specify the outcome as "undefined"

2. if you attempt to rename a file that doesn't exist to a path in the same directory, it returns false, rather than raising a FileNotFoundException

I'm assuming here that the dest path is being checked before the source. I'd consider this an error.
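The check order being argued for can be sketched as follows; RenameChecks is a hypothetical helper over java.nio.file, not the HDFS code:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper showing the proposed order: validate the source first,
// so a missing source raises FileNotFoundException instead of the rename
// quietly returning false because the destination check happened to run first.
class RenameChecks {
    static boolean rename(Path src, Path dest) throws IOException {
        if (!Files.exists(src)) {
            throw new FileNotFoundException("rename source not found: " + src);
        }
        if (Files.exists(dest)) {
            return false; // destination already present: report, don't throw
        }
        Files.move(src, dest);
        return true;
    }
}
```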

3. delete("/", true) returns false and doesn't delete anything

I've documented this as valid behaviour, and noted it is what HDFS does.


-1 overall. Here are the results of testing the latest attachment
against trunk revision .

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 65 new or modified test files.

-1 javac. The applied patch generated 1547 javac compiler warnings (more than the trunk's current 1546 warnings).

+1 eclipse:eclipse. The patch built with eclipse:eclipse.

+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 contrib tests. The patch passed contrib unit tests.

This message is automatically generated.


## People

• Assignee: Steve Loughran
• Reporter: Steve Loughran
• Votes: 0
• Watchers: 17

## Dates

• Created:
• Updated:

## Time Tracking

• Estimated: 48h
• Remaining: 48h
• Logged: Not Specified