Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.0
    • Component/s: libhdfs
    • Labels:
      None
    • Environment:

      Windows, Visual Studio 2008

    • Target Version/s:
    • Hadoop Flags:
      Reviewed
    • Release Note:
      The libhdfs C API is now supported on Windows.

      Description

      The current C code in libhdfs is written using C99 conventions and also uses a few POSIX-specific facilities such as hcreate, hsearch, and pthread mutex locks. Compiling it with Visual Studio would require converting the code in hdfsJniHelper.c and hdfs.c to C89 and replacing or reimplementing the POSIX functions. The code also uses the stdint.h header, which is not part of C89, but an apparently BSD-licensed reimplementation compatible with MSVC is available. I have already done the other necessary conversions, created a simplistic hash bucket for use with hcreate and hsearch, and successfully built a DLL of libhdfs. Further testing is needed to see if other programs can use it to actually access HDFS; that will likely happen in the next few weeks as the Condor Project continues with its file transfer work.
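
      As a rough sketch of the shim approach described above (the names and layout are assumptions, not the actual patch), a Windows-only header could declare a drop-in replacement for the POSIX hcreate/hsearch interface and fall back to the system <search.h> everywhere else:

      #include <stddef.h>

      #ifdef WIN32
      /* Minimal hsearch-compatible interface backed by a simple bucket table. */
      typedef struct entry { char *key; void *data; } ENTRY;
      typedef enum { FIND, ENTER } ACTION;
      int hcreate(size_t nel);                   /* allocate nel buckets   */
      ENTRY *hsearch(ENTRY item, ACTION action); /* find or insert by key  */
      void hdestroy(void);                       /* free the table         */
      #else
      #include <search.h>                        /* POSIX already provides these */
      #endif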

      In the process, I've removed a few consts that I believe are extraneous and also fixed an incorrect array initialization where the code attempted something like this: JavaVMOption options[noArgs]; where noArgs was being incremented in the code above. This was in the hdfsJniHelper.c file, in the getJNIEnv function.
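
      One C89-friendly way to handle that case (a sketch only; the patch may do it differently, and the helper name is illustrative) is to allocate the option array on the heap once noArgs is known:

      #include <stdio.h>
      #include <stdlib.h>
      #include <jni.h>

      /* Allocate the JVM option array instead of using a C99 variable-length array. */
      static JavaVMOption *allocVMOptions(int noArgs)
      {
          JavaVMOption *options;
          options = (JavaVMOption *) calloc((size_t) noArgs, sizeof(JavaVMOption));
          if (options == NULL) {
              fprintf(stderr, "calloc failed for %d JavaVMOption entries\n", noArgs);
              return NULL;
          }
          return options; /* caller fills the entries, passes them to JNI_CreateJavaVM, then frees */
      }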

      1. HDFS-573.4.patch
        164 kB
        Chris Nauroth
      2. HDFS-573.3.patch
        165 kB
        Chris Nauroth
      3. HDFS-573.2.patch
        165 kB
        Chris Nauroth
      4. HDFS-573.1.patch
        144 kB
        Chris Nauroth

        Issue Links

          Activity

          Ziliang Guo created issue -
          dhruba borthakur added a comment -

          Is there a way to create a new file that contains functions that implement hsearch/hcreate etc. for Windows? In that case, we can continue to use the current code on Linux.

          Ziliang Guo added a comment -

          I should have been more explicit. That is exactly what I have done for
          hcreate/hsearch. The header with the functions is only included if WIN32 is
          defined. I also modified the locking macros in a similar #ifdef statement
          to the appropriate Windows locks. The current code should continue to work
          on Linux, barring a screwup in my moving the variable declarations to the
          top of the function as required in C89. However, there was a lot of moving
          things around.
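
          A minimal sketch of that kind of switch, assuming CRITICAL_SECTION on the Windows side (the macro and type names here are illustrative, not the actual code):

          #ifdef WIN32
          #include <windows.h>
          typedef CRITICAL_SECTION hdfs_mutex_t;   /* must be set up with InitializeCriticalSection */
          #define HDFS_MUTEX_LOCK(m)   EnterCriticalSection(&(m))
          #define HDFS_MUTEX_UNLOCK(m) LeaveCriticalSection(&(m))
          #else
          #include <pthread.h>
          typedef pthread_mutex_t hdfs_mutex_t;    /* PTHREAD_MUTEX_INITIALIZER works for statics  */
          #define HDFS_MUTEX_LOCK(m)   pthread_mutex_lock(&(m))
          #define HDFS_MUTEX_UNLOCK(m) pthread_mutex_unlock(&(m))
          #endif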

          dhruba borthakur added a comment -

          OK, thanks for the explanation.

          Ziliang Guo added a comment -

          Faisal has tested the current code on Linux and, besides one minor fix, it
          seems to work the same as before, so at least nothing seems to have broken
          during the reworking. Considering how extensive the changes were, do you
          guys want a diff or just a copy of the current source code as is?

          dhruba borthakur added a comment -

          Some of the details are in http://wiki.apache.org/hadoop/HowToContribute.

          You have to create a "svn diff" in your workspace and then attach it as a text file to this JIRA. Also, please see if you can run the libhdfs unit test.

          Faisal Khan added a comment -

          I ran unit tests on Ziliang's patch for libhdfs on Linux and here is the output http://pages.cs.wisc.edu/~faisal/libhdfs_testresult.txt . Tests look ok.

          dhruba borthakur added a comment -

          > ran unit tests on Ziliang's patch for libhdfs

          where is the patch file?

          Ziliang Guo added a comment -

          Faisal and I both work for Condor so what he actually did was run the tests
          on the stuff we have sitting in the repo, not some generated patch. I'll be
          generating that patch hopefully next week, as school has started for me and
          I had to deal with that this week.

          Stephen Bovy added a comment -

          Darn, I just re-did all the work you guys did!

          Is there anyone out there still interested in this?

          What hashing functions did you use?

          I used ut-hash.

          Any suggestions would be appreciated.

          It is a pain in the butt making "c" code look like C code is supposed to look!

          I worked off the 2.0 branch and unfortunately more Linux-only anachronisms were added.

          Hard-coded Linux-only errno values, yikes!

          I have some serious performance questions about the hashing functions.

          Why not use thread-local storage for the hashing table? Then we can avoid the performance hit
          from locks we do not need to use.

          The JVM init code should not be embedded into every function call; many of these functions will be used in loops.

          The JVM init should be in a stand-alone function that is called only once to init the library.

          Then we can use a global variable for the JVM pointer, and the function to discover the count of
          JVMs is not needed.

          The JVM attach functionality should only be invoked if threads are used; otherwise it is another
          useless performance killer.

          Allen Wittenauer made changes -
          Field Original Value New Value
          Link This issue duplicates HDFS-5642 [ HDFS-5642 ]
          Chris Nauroth made changes -
          Assignee Chris Nauroth [ cnauroth ]
          Chris Nauroth added a comment -

          This patch gets the current trunk/branch-2 libhdfs source code compiling and working on Windows. Linux-specific code has been either eliminated in favor of something platform-agnostic, or ported to use the corresponding Windows system calls. It's a large patch, but unfortunately, I don't see a logical way to break it into smaller pieces.

          Instead of using a lot of conditional compilation like we do in libhadoop.so/hadoop.dll, the approach is to split platform-specific code into platform-specific files. CMake selects the correct files for the platform at build time. I think this yields more legible code. I modeled the source tree structure after what OpenJDK uses (/os/<platform>).
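
          A hedged sketch of the pattern (file names and signatures here are illustrative, not necessarily those in the patch): one shared header declares the primitive, each platform supplies its own .c file, and the build compiles exactly one of them.

          /* os/mutexes.h -- shared, platform-neutral declaration */
          #ifndef OS_MUTEXES_H
          #define OS_MUTEXES_H
          typedef struct hdfs_mutex hdfs_mutex;  /* opaque; each platform defines it */
          int mutexLock(hdfs_mutex *m);          /* returns 0 on success             */
          int mutexUnlock(hdfs_mutex *m);
          #endif

          /* os/posix/mutexes.c would wrap pthread_mutex_lock/unlock,
             os/windows/mutexes.c would wrap Enter/LeaveCriticalSection,
             and CMakeLists.txt selects one of the two source files per platform. */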

          All automated tests pass on both Linux and Windows, except for zero-copy which isn't yet supported on Windows. In addition to the automated tests, I manually ran test_libhdfs_ops against live clusters running on both Linux and Windows.

          Here are details on a couple of specific points:

          • BUILDING.txt
            • I used this opportunity as a testbed for CMake on Windows, and it worked out great. We might consider doing the same for hadoop-common later instead of checking in .vcproj files with logic that duplicates the CMake logic. I've updated the build instructions to indicate that CMake is a requirement on Windows now.
          • pom.xml
            • Add steps to trigger CMake build on Windows.
            • Refactored logic of native_tests to use an Ant macro.
            • I noticed that test_libhdfs_zerocopy wasn't actually being run, and none of the tests were running with libhadoop.so/hadoop.dll, so I took the opportunity to fix that. test_libhdfs_zerocopy only runs on Linux, because Windows doesn't yet support short-circuit reads, and therefore cannot support zero-copy.
          • CMakeLists.txt
            • Parameterized various build steps for POSIX vs. Windows platform differences.
            • Don't compile posix_util.c. Instead, compile it in fuse-dfs, which was the only thing actually using it. This way, we don't need to port code that isn't really used in libhdfs.
          • htable.c/htable.h
            • libhdfs keeps a very small hash table mapping class names to class references. This had been implemented using the Linux-specific hcreate and hsearch functions. The simplest solution was to take this hash table code from the HADOOP-10388 branch. These files are identical to the code on the feature branch, where it's already been code reviewed and +1'd once.
          • jni_helper.c
            • Removed the hdfsTls struct. We can store the JNIEnv pointer directly into thread-local storage, so we don't need this container struct.
          • mutexes.c
            • The Windows version needs to do some linker trickery to guarantee initialization of each CRITICAL_SECTION. The comments explain this in detail.
          • thread_local_storage.c
            • In the Windows version, it was pretty challenging to recreate the logic of using a pthreads thread-local storage key destructor to detach the thread from the JVM on exit. Windows doesn't offer a simple API for hooking onto a thread shutdown event, but the portable executable format does define a place for thread-local storage callbacks. This involves more linker trickery. Details are in the comments.
          • Stop using C99 constructs and stick to C89 in various files.
            • Declare local variables at the top of the function.
            • Don't use designated initializers on structs.
            • Don't use variable-length arrays.
          • Clean up warnings in various files.
            • implicit conversions
            • losses of precision
            • assignments from conditionals
          • Several files needed to rename internal constants that clashed with names in Windows headers.

          libwebhdfs is not covered in this patch. That would need to be handled separately.

          Similarly, vecsum is not covered in this patch. We'd need to port the sys/mman.h functions to get that working.

          fuse-dfs is unchanged. I believe fuse isn't supported on Windows.
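
          As a small before/after illustration of the C89 conversions listed above (the struct and values are invented for the example, not taken from the patch):

          /* C99 style being removed: designated initializer, declarations mixed with statements:
                 struct confOpt opt = { .key = "dfs.replication", .val = "1" };                   */

          /* C89 style: declare at the top of the block, then assign field by field. */
          struct confOpt { const char *key; const char *val; };

          static void setExampleOpt(void)
          {
              struct confOpt opt;  /* all declarations first */
              opt.key = "dfs.replication";
              opt.val = "1";
              (void) opt;          /* keeps the self-contained example warning-free */
          }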

          Chris Nauroth made changes -
          Attachment HDFS-573.1.patch [ 12658964 ]
          Chris Nauroth made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Chris Nauroth made changes -
          Target Version/s 3.0.0, 2.6.0 [ 12320356, 12327181 ]
          Component/s libhdfs [ 12313126 ]
          Component/s hdfs-client [ 12312928 ]
          Chris Nauroth made changes -
          Link This issue relates to HADOOP-10903 [ HADOOP-10903 ]
          Colin Patrick McCabe added a comment -

          Thanks for looking at this, Chris. Looks pretty good.

          /* Use gcc type-checked format arguments.  This is not supported on Windows. */
          #ifdef _WIN32
          #define TYPE_CHECKED_PRINTF_FORMAT(formatArg, varArgs)
          #else
          #define TYPE_CHECKED_PRINTF_FORMAT(formatArg, varArgs) \
            __attribute__((format(printf, formatArg, varArgs)))
          #endif
          

          Let's put this in platform.h rather than exception.h. It will be useful in a lot of different spots.

              const tTime NO_CHANGE = -1;
          

          Let's make this static while we're moving it.

              const char *cPathName;
              const char* cUserName;
              const char* cGroupName;
          

          nit: stars should be next to the variable name

          jni_helper.c: This hash table stuff is not quite correct. For instance, here:

          static int hashTableInit(void)
          {
              if (!gClassRefHTable) {
                  LOCK_HASH_TABLE();
          

          You're checking gClassRefHTable without the lock, so there's no guarantee that another thread's changes will be visible to you.

          The code here needs some fixing (and needed some fixing even before your patch-- this isn't something you introduced.) You can see that there's a potential double insert issue here:

          jthrowable globalClassReference(const char *className, JNIEnv *env, jclass *out)
          {
              jclass clsLocalRef;
              jclass cls = searchEntryFromTable(className);
              if (cls) {
                  *out = cls;
                  return NULL;
              }
              <===== POINT 1
              clsLocalRef = (*env)->FindClass(env,className);
              if (clsLocalRef == NULL) {
                  return getPendingExceptionAndClear(env);
              }
              cls = (*env)->NewGlobalRef(env, clsLocalRef);
              if (cls == NULL) {
                  (*env)->DeleteLocalRef(env, clsLocalRef);
                  return getPendingExceptionAndClear(env);
              }
              (*env)->DeleteLocalRef(env, clsLocalRef);
              insertEntryIntoTable(className, cls);
              *out = cls;
              return NULL;
          

          Because globalClassReference drops the lock after searchEntryFromTable, two threads could both get to POINT 1 at the same time, and end up creating two global class references to the same class. Then the insert would fail for one and we'd have a memory leak.

          It's better just to have the globalClassReference function hold the hash table lock the whole way through, and atomically search + insert if not present. We also don't need the silly LOCK_HASH_TABLE macros (a macro for one ordinary function call?), or really any of the hash table wrapper functions. Check out the HADOOP-10388 branch, this is fixed there.

          Again, I realize you didn't break this, but while we're in this code, let's fix it.
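
          A hedged sketch of that shape (the mutex and htable helpers are placeholders, not the exact HADOOP-10388 code):

          jthrowable globalClassReference(const char *className, JNIEnv *env, jclass *out)
          {
              jthrowable jthr = NULL;
              jclass local = NULL;
              jclass global;

              mutexLock(&hashTableMutex);                     /* held for the whole operation */
              global = (jclass) htableGet(gClassRefHTable, className);
              if (global) {
                  *out = global;
                  goto done;                                  /* hit under the lock: no race  */
              }
              local = (*env)->FindClass(env, className);
              if (!local) {
                  jthr = getPendingExceptionAndClear(env);
                  goto done;
              }
              global = (*env)->NewGlobalRef(env, local);
              if (!global) {
                  jthr = getPendingExceptionAndClear(env);
                  goto done;
              }
              htablePut(gClassRefHTable, className, global);  /* insert still under the lock  */
              *out = global;
          done:
              if (local) {
                  (*env)->DeleteLocalRef(env, local);
              }
              mutexUnlock(&hashTableMutex);
              return jthr;
          }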

              char *hadoopClassPathVMArg = "-Djava.class.path=";
          

          If this doesn't change, it should be const char * const.
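
          For example (illustrative only):

              static const char * const hadoopClassPathVMArg = "-Djava.class.path=";  /* pointer and pointee both read-only */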

              hdfsBuilderSetNameNodePort(bld, (tPort)port);
          

          Can we avoid this typecast by making port be a variable of type tPort? Same comment in hdfsSingleNameNodeConnect.

          thanks Chris

          Stephen Bovy added a comment -

          Gentlemen,

          One thing to keep in mind is that thread support should not be hard-coded. It should be "optional".

          Why waste resources and CPU cycles on a feature that may seldom be required?

          If threads are not required, many optimizations are possible, including no need to lock the hash table.

          Colin Patrick McCabe added a comment -

          One thing to keep in mind is that thread support should not be hard-coded. It should be "optional".

          Stephen, libhdfs relies on libjvm.so, which has libpthread.so as a hard dependency. There isn't any way to use it without thread support.

          This is even more clear when you consider what libhdfs is doing... starting a JVM and communicating with it. That JVM is going to use a bunch of threads since that's how the HDFS client works.

          Stephen Bovy added a comment -

          Thanks,

          I am probably exposing my ignorance, so please forgive me. Are you saying that using JNI automatically implies and requires thread support, and that every JNI call is running on a thread?

          My hdfs client does not use threads, so each hdfs call is synchronous, and each jni call is also synchronous, and within the
          context the code accessing the hash table should also be synchronous. Please correct me gently if I am wrong

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12658964/HDFS-573.1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.TestLeaseRecovery2

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7514//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7514//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          I think there are 2 aspects to the question:

          1. libhdfs embeds a JVM. The JVM itself always runs multiple internal threads, even if your libhdfs application code doesn't run multiple threads. This means that by extension, a libhdfs application is always multi-threaded, even if the application's code is entirely single-threaded/synchronous. This rules out things like linking to a single-threaded C runtime library for a supposed performance boost with single-core execution. A libhdfs application must always link to a C runtime library with multi-threading support.
          2. As far as the data structures inside the libhdfs code itself, you're correct that there is no thread safety concern if the application runs entirely single-threaded and makes synchronous calls. Technically, we don't need a lock around the hash table in that case. However, it might just cause end user confusion if we publish thread-safe vs. non-thread-safe builds or some kind of configuration flag to skip the locking. The effects of running multiple threads without the locking would be catastrophic, probably a crash of some sort. I haven't personally seen contention on this lock cause a real-world performance bottleneck, so I wonder if such an optimization is necessary.

          For the scope of this patch, I'd prefer to focus on a straight-up port of the existing code to work on Windows. We're taking a big step here, moving from not even compiling on Windows to fully functional, and the patch is already pretty large. Potential performance enhancements certainly are welcome in separate patches.

          FWIW, I think libhdfs has a weakness in that it has no clear-cut "initialize" function for the application to call during a single-threaded bootstrap sequence. This would have given us an easy place to start the JavaVM and pre-populate the mapping of class names to class references. Unfortunately, it would be backwards-incompatible to add that function now and demand existing applications change their code to call our initialize function. Instead, we have no choice but to do lazy initialization, and that drives a lot of the complexity in libhdfs with the mutexes and the thread-local storage. From my very quick scan of the HADOOP-10388 branch, it looks like we'll be providing a clearer initialization sequence there. libhdfs likely will need to remain this way though.

          Colin Patrick McCabe added a comment -

          I am probably exposing my ignorance, so please forgive me. Are you saying that using JNI automatically implies and requires thread support

          Yep, that's what I'm saying.

          and that every JNI call is running on a thread?

          Not every JNI call runs in a different thread, but many HDFS JNI calls certainly do. For example, hdfsWrite uses DFSOutputStream, which ends up starting a thread to write to the pipeline.

          From my very quick scan of the HADOOP-10388 branch, it looks like we'll be providing a clearer initialization sequence there. libhdfs likely will need to remain this way though.

          I agree 100% that libhdfs should have had an "init" function that created some kind of context we could pass around. But... we're going to try to keep the existing API in HADOOP-10388. Sorry, it's just really nice to keep compatibility where you can.

          Stephen Bovy added a comment -

          Thanks Chris,

          We have had some offline discussions before. Thanks for the explanation.

          I have indeed added many enhancements. I would need to get management permission to share these (sigh)

          I have added optional support for dynamically loading the JVM. This simplifies build issues, and solves a lot of configuration
          usage issues.

          I have indeed added an optional lib-init function and have also added support for using a global static for the JVM pointer.

          I have added support for a thread-flag, which can be statically set by the compiler or dynamically set in the lib-init.

          When the thread flag is not set I use a static global to save the thread-env pointer which gets created when the jvm is
          created, and I only need to utilize and access that one-pointer in one-place.

          When the thread flag is not set, all the special thread code is bypassed with IF statements.

          I have tested this in thread mode with the thread tester, and of course I am using it with my app in non-threaded mode.

          Works great either way.

          Stephen Bovy added a comment -

          SAMPLE "Optional" INIT-LIB function

          // FLAG :: init-lib invoked (speed up jvm-init and avoid locks)
          extern short hdfs_JniInitLib;

          extern char hdfs_HadoopHome[2000];
          extern char hdfs_JavaHome[2000];

          // the following are used for no-threads support
          // use this flag to bypass thread logic
          // enable non-threaded speed-ups
          extern short hdfs_Threads;

          // Init the HDFS library
          int hdfsJNILibInit(pHdfsInitParms parms)
          {
              JNIEnv *env;

              // disable thread support for now.
              hdfs_Threads = 0;

              if (parms) {

                  if (parms->JavaHome) {
                      if (strlen(parms->JavaHome) > 2000) {
                          fprintf(stderr, "The JAVA_HOME variable is too long.\n");
                          return 1;
                      }
                      strcpy(hdfs_JavaHome, parms->JavaHome);
                  }

                  if (parms->HadoopHome) {
                      if (strlen(parms->HadoopHome) > 2000) {
                          fprintf(stderr, "The HADOOP_HOME variable is too long.\n");
                          return 1;
                      }
                      strcpy(hdfs_HadoopHome, parms->HadoopHome);
                  }

                  if (parms->threads)
                      hdfs_Threads = parms->threads;
              }

              env = getJNIEnv();
              if (!env) return 1;

              hdfs_JniInitLib = 1;

              return 0;
          }

          Chris Nauroth added a comment -

          I guess I misread something on HADOOP-10388. I thought I saw a nice clean init function in the jni_helper.c over there. I may have incorrectly assumed that this cascaded all the way out to the client-facing API.

          Thanks for sharing your experiences, Stephen. Unfortunately, I think we'd have a hard time incorporating those changes right now, given the compatibility concerns.

          I suppose backwards-incompatible changes like this could be considered on the 3.x release boundary.

          BTW Colin, thanks for the code review. The work so far has been aimed at a straight port, warts and all, but I'm happy to roll in a few more small fixes for existing problems while I'm in here. I'll work on a v2 of the patch.

          I have just one question though. My initial inclination was to put TYPE_CHECKED_PRINTF_FORMAT in platform.h as well. However, I then backed that out and put the ifdef in exception.h, because it has never been clear to me if exception.h is part of the public API. Most of the functions can't reasonably be considered public, because of the dependence on passing a JNIEnv. However, then there is getExceptionInfo. As long as we agree that only hdfs.h is the public API, and not exception.h, then I'll move TYPE_CHECKED_PRINTF_FORMAT back to platform.h. If client applications ever #include <exception.h>, then they'd also have the complexity of selecting the correct platform.h, which would be undesirable.

          Stephen Bovy added a comment -

          Thanks

          I am an old-fashioned IBM mainframer. All my changes are backwards compatible.

          Here is a slice for setting up dynamic load of the JVM

          // begin JVM function set-up
          // new jvm function declarations
          typedef jint (*FGetVMS) ( JavaVM **, const jsize, jint * );
          typedef jint (*FCreateVM) ( JavaVM **, void **, JavaVMInitArgs * );
          #ifdef LOADJVM
          // dynamically loaded
          static FGetVMS hdfs_fpGetVM = NULL;
          static FCreateVM hdfs_fpCreateVM = NULL;
          #else
          // implicitly linked and auto-loaded (original default code)
          static FGetVMS hdfs_fpGetVM = JNI_GetCreatedJavaVMs;
          static FCreateVM hdfs_fpCreateVM = JNI_CreateJavaVM;
          #endif
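
          For what it's worth, a hedged sketch of how those pointers might be resolved at runtime when LOADJVM is defined (library names, error handling, and the assumption that FGetVMS/FCreateVM are function-pointer typedefs are all illustrative):

          #ifdef LOADJVM
          #ifdef WIN32
          #include <windows.h>
          static int loadJvmFunctions(void)
          {
              HMODULE jvm = LoadLibraryA("jvm.dll");          /* assumes jvm.dll is on PATH */
              if (!jvm) return 1;
              hdfs_fpGetVM    = (FGetVMS)  GetProcAddress(jvm, "JNI_GetCreatedJavaVMs");
              hdfs_fpCreateVM = (FCreateVM) GetProcAddress(jvm, "JNI_CreateJavaVM");
              return (hdfs_fpGetVM && hdfs_fpCreateVM) ? 0 : 1;
          }
          #else
          #include <dlfcn.h>
          static int loadJvmFunctions(void)
          {
              void *jvm = dlopen("libjvm.so", RTLD_NOW);      /* assumes libjvm.so is findable */
              if (!jvm) return 1;
              hdfs_fpGetVM    = (FGetVMS)  dlsym(jvm, "JNI_GetCreatedJavaVMs");
              hdfs_fpCreateVM = (FCreateVM) dlsym(jvm, "JNI_CreateJavaVM");
              return (hdfs_fpGetVM && hdfs_fpCreateVM) ? 0 : 1;
          }
          #endif
          #endif /* LOADJVM */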

          Colin Patrick McCabe added a comment -

          I have just one question though. My initial inclination was to put TYPE_CHECKED_PRINTF_FORMAT in platform.h as well. However, I then backed that out and put the ifdef in exception.h, because it has never been clear to me if exception.h is part of the public API

          The only header file that's part of the public API is hdfs.h. That's the only one we export to end-users... nobody can even get access to the other ones without a Hadoop source tree. You should feel free to change, add, or remove things from any header file without worrying about compatibility, as long as that header is not hdfs.h.

          BTW Colin, thanks for the code review. The work so far has been aimed at a straight port, warts and all, but I'm happy to roll in a few more small fixes for existing problems while I'm in here. I'll work on a v2 of the patch.

          Thanks, Chris. I think what you've got looks pretty good... I wish all libhdfs patches could be this good

          Colin Patrick McCabe added a comment -

          Stephen Bovy: you'll be happy to hear that we dynamically load libjvm.so in the HADOOP-10388 branch. The main reason for doing it there is that the branch adds a pure native client which doesn't require libjvm.so, in addition to the existing JNI client.

          Stephen Bovy added a comment -

          Thanks Colin,

          That is good news. The new pure native client looks very interesting. Our app context is "c", although some parts of our app are also C++, so
          we would need "c" binders/wrappers if this were written in C++.

          On the topic of "threads" here is my thinking:

          The lib-hdfs is like a skin-graft (or virus). It must adapt to the context of its "host" without harming the host. If the "host" context is a "thread", then of course lib-hdfs must survive in that context. But if the "host" context is not a thread, it should thrive with the host without unnecessary thread-logic overhead.

          If the "Java code" uses threads "under the covers", that is a black-box non-issue as far as the host is concerned and should not be germane to this discussion.

          I have added some backwards compatible code based on a flag to bypass the thread-logic. The flag-default=threaded, that's why it is backwards
          compatible. I have regression tested this with the host-thread-test app. And it is working in my non-threaded app without any complaints.

          I have also added an optional backwards-compatible lib-init function that enables the usage of a static-global for the JVM pointer. This of course is
          another "optimization"

          I have also added a NEW function to hdfs.c to support "globs"

          I would be happy to share some code-slices ( as food for thought ) thanks

          Stephen Bovy added a comment -

          Whoops, let me add one more thought about the "thread" discussion.

          Some of these lib-hdfs functions are going to be called in a read/write loop; that is why it is so important (for performance's sake) to make these
          function calls as efficient as possible. Performance is not a trivial issue.

          Colin Patrick McCabe added a comment -

          Let's try to stay focused on porting libhdfs to windows... that's what this JIRA is about, after all

          Stephen Bovy added a comment -

          Yes, thanks

          The scope of integrating lib-hdfs for our use went way beyond just Windows. We could not use a library with obvious performance degradation
          built in for a thread feature that was a NOP; the thread enhancements should have been optional.

          Stephen Bovy added a comment -

          Thanks again for the opportunity to share experience and insights in the use of libhdfs.

          I know some of my comments are off-topic. Maybe we could open another Jira to discuss thread/performance issues.

          I would be happy to post some code slices to demonstrate and explain the issues.

          I did find a small bug in the unix thread code in "JNIEnv* getJNIEnv(void)"

          >>>>>>>

          tls->env = env;

          // note this was in the wrong location and has been moved
          #ifdef HAVE_BETTER_TLS
          quickTls = tls;
          return env;
          #endif
          // note this was in the wrong location and has been moved

          ret = pthread_setspecific ( hdfs_gTlsKey, tls );
          if (ret) {
              fprintf(stderr, "getJNIEnv: pthread_setspecific failed with "
                      "error code %d\n", ret);
              hdfsThreadDestructor(tls);
              return NULL;
          }

          #endif // endif save unix thread local storage

          return env;

          }

          <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

          Colin Patrick McCabe added a comment -

          Stephen, that code doesn't appear anywhere in the patch as far as I can tell. Am I missing something?

          Chris Nauroth added a comment -

          We have an existing jira, HDFS-5644, which is relevant to these side discussions on lock contention and threading support.

          Stephen Bovy added a comment -

          In the 2.2 GA branch you can see the following:

          JNIEnv* getJNIEnv(void)
          {

          ( other code ... )

          tls->env = env;
          ret = pthread_setspecific(gTlsKey, tls);
          if (ret) {
              fprintf(stderr, "getJNIEnv: pthread_setspecific failed with "
                      "error code %d\n", ret);
              hdfsThreadDestructor(tls);
              return NULL;
          }

          #ifdef HAVE_BETTER_TLS
          quickTls = tls;
          #endif
          return env;
          }

          I think the above should be changed as follows:

          tls->env = env;

          #ifdef HAVE_BETTER_TLS
          quickTls = tls;
          return env;
          #endif

          ret = pthread_setspecific(gTlsKey, tls);
          if (ret) {
              fprintf(stderr, "getJNIEnv: pthread_setspecific failed with "
                      "error code %d\n", ret);
              hdfsThreadDestructor(tls);
              return NULL;
          }

          return env;
          }

          Maybe the above has already been fixed (or replaced?)

          Colin Patrick McCabe added a comment -

          Chris Nauroth: Just a heads-up... I'm going on vacation next week. I can try to review this today if you've got a new version ready... otherwise I'll review it when I get back a week after next Monday.

          Stephen Bovy: let's try to keep the comments relevant to this patch. The code that you're commenting about does not exist in trunk or in this patch, and it would have taken you only a second or two to verify this before posting. Thanks

          Stephen Bovy added a comment -

          My apologies, I am not familiar with the source code system, nor do I have access to it.
          I downloaded the 2.2 GA snapshot, and I spoke only from that context.

          Colin Patrick McCabe added a comment -

          Stephen Bovy: It can be a little confusing to get started with a project like Hadoop. I can understand how you might not get some of our conventions right away. The great thing about open source is everyone has access to the code. Take a look at https://svn.apache.org/repos/asf/hadoop/common/trunk. Before you post a JIRA or comment, it's a good idea to check out whether your bug or idea has already been implemented or fixed in there.

          Btw, Chris, if you want to get someone else to review and commit this next week, that's fine with me too. I think it's pretty good barring the comments I already made... was just making the vacation comment to let you know why I won't be commenting on Monday... probably

          Chris Nauroth added a comment -

          Thanks for the heads-up, Colin. Here is patch v2.

          • TYPE_CHECKED_PRINTF_FORMAT is in platform.h.
          • hash table locking is reworked.
          • minor changes for const-ness and putting the * with the variable name instead of the data type for pointers.

          Can we avoid this typecast by making port be a variable of type tPort?

          The challenge here is that nmdGetNameNodePort returns int, but subsequent code wants a tPort (a uint16_t), so a cast is unavoidable. I don't want to change the return type of nmdGetNameNodePort right now, because fuse-dfs calls it too, and I don't want to expand the scope of this patch into fuse-dfs code. I did however change the type of port and cast the return value immediately, which I think better documents intent.
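
          As a tiny illustration of that cast-at-the-boundary idea (getPortFromCluster below is a hypothetical stand-in for nmdGetNameNodePort, and tPort is assumed to be the uint16_t typedef from hdfs.h), the narrowing happens once, at the call site, and everything after it works with tPort:

          #include <stdint.h>
          #include <stdio.h>

          typedef uint16_t tPort;            /* assumed to match the libhdfs typedef */

          /* Hypothetical stand-in for nmdGetNameNodePort, which returns int. */
          static int getPortFromCluster(void)
          {
              return 8020;                   /* placeholder value for illustration */
          }

          int main(void)
          {
              /* Cast once, immediately, so later code only sees a tPort. */
              tPort port = (tPort)getPortFromCluster();
              printf("namenode port: %u\n", (unsigned)port);
              return 0;
          }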

          I wish all libhdfs patches could be this good.

          Thanks for the constructive feedback!

          Chris Nauroth made changes -
          Attachment HDFS-573.2.patch [ 12659277 ]
          Colin Patrick McCabe added a comment -

          Can get rid of hashTableInit.

          TPort explanation makes sense, thanks.

          +1 once hashTableInit comment is addressed

          Chris Nauroth added a comment -

          Whoops, thanks for the catch. Here is patch v3, dropping hashTableInit.

          Chris Nauroth made changes -
          Attachment HDFS-573.3.patch [ 12659303 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12659277/HDFS-573.2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7534//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7534//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12659303/HDFS-573.3.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7537//console

          This message is automatically generated.

          Chris Nauroth made changes -
          Attachment HDFS-573.4.patch [ 12659475 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12659475/HDFS-573.4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7539//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7539//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12659475/HDFS-573.4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 7 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.http.TestHttpServerLifecycle

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7538//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7538//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          The test failures are unrelated. I'm planning to commit this on Monday, 8/4.

          Chris Nauroth made changes -
          Hadoop Flags Reviewed [ 10343 ]
          Chris Nauroth added a comment -

          I've committed this to trunk and branch-2. Thank you, Colin, for the very helpful code review. Thank you, Stephen, for the discussion of other potential improvements.

          I also want to thank all of the original participants who got the ball rolling with this issue back in 2009. It feels really good to click the resolve button on a 5-year-old jira.

          Chris Nauroth made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Fix Version/s 3.0.0 [ 12320356 ]
          Fix Version/s 2.6.0 [ 12327181 ]
          Resolution Fixed [ 1 ]
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6038 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6038/)
          HDFS-573. Porting libhdfs to Windows. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1616814)

          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/mutexes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread_local_storage.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/inttypes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/unistd.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_read.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_write.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
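
          The os/mutexes.h header and its os/posix/mutexes.c and os/windows/mutexes.c counterparts in the list above carry the platform split. Purely as a hypothetical sketch of that kind of abstraction (these names and bodies are illustrative, not the actual libhdfs sources), a shared mutex interface with two backends might look like this, the idea being that common code includes only the shared declarations:

          /* mutex_sketch.h -- hypothetical cross-platform mutex wrapper */
          #ifdef _WIN32
          #include <windows.h>
          typedef CRITICAL_SECTION mutex_sketch;
          #else
          #include <pthread.h>
          typedef pthread_mutex_t mutex_sketch;
          #endif

          /* Each function returns 0 on success, nonzero on failure. */
          static int mutexSketchInit(mutex_sketch *m)
          {
          #ifdef _WIN32
              InitializeCriticalSection(m);  /* returns void in the Win32 API */
              return 0;
          #else
              return pthread_mutex_init(m, NULL);
          #endif
          }

          static int mutexSketchLock(mutex_sketch *m)
          {
          #ifdef _WIN32
              EnterCriticalSection(m);
              return 0;
          #else
              return pthread_mutex_lock(m);
          #endif
          }

          static int mutexSketchUnlock(mutex_sketch *m)
          {
          #ifdef _WIN32
              LeaveCriticalSection(m);
              return 0;
          #else
              return pthread_mutex_unlock(m);
          #endif
          }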
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #639 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/639/)
          HDFS-573. Porting libhdfs to Windows. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1616814)

          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/mutexes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread_local_storage.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/inttypes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/unistd.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_read.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_write.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Hdfs-trunk #1832 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1832/)
          HDFS-573. Porting libhdfs to Windows. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1616814)

          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/mutexes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread_local_storage.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/inttypes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/unistd.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_read.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_write.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1858 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1858/)
          HDFS-573. Porting libhdfs to Windows. Contributed by Chris Nauroth. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1616814)

          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/mutexes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread_local_storage.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/inttypes.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/mutexes.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/platform.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/unistd.h
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_read.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_write.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
          Allen Wittenauer made changes -
          Fix Version/s 3.0.0 [ 12320356 ]
          Chris Nauroth made changes -
          Link This issue is related to HDFS-6979 [ HDFS-6979 ]
          Chris Nauroth made changes -
          Release Note The libhdfs C API is now supported on Windows.
          Arun C Murthy made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          Chris Nauroth made changes -
          Link This issue is related to HDFS-7879 [ HDFS-7879 ]
          Chris Nauroth made changes -
          Link This issue breaks HDFS-8346 [ HDFS-8346 ]
          Chris Nauroth added a comment -

          This patch accidentally introduced a build failure for the libwebhdfs contrib module. I have submitted a patch to fix it on HDFS-8346.

          Transition                   Time In Source Status   Execution Times   Last Executer   Last Execution Date
          Open → Patch Available       1799d 22h 53m           1                 Chris Nauroth   31/Jul/14 18:41
          Patch Available → Resolved   7d 22h 50m              1                 Chris Nauroth   08/Aug/14 17:31
          Resolved → Closed            114d 10h 37m            1                 Arun C Murthy   01/Dec/14 03:09

            People

            • Assignee:
              Chris Nauroth
              Reporter:
              Ziliang Guo
            • Votes:
              0
              Watchers:
              11

              Dates

              • Created:
                Updated:
                Resolved:

                Time Tracking

                 Estimated:
                 336h
                 Remaining:
                 336h
                 Logged:
                 Not Specified
