AVRO-593

Java: add support for Hadoop's new mapreduce APIs

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.3.2, 1.3.3
    • Fix Version/s: 1.7.0
    • Component/s: java
    • Labels: None
    • Environment: Avro 1.3.3, Hadoop 0.20.2

    • Release Note: Add new mapreduce API bindings for Hadoop.

      Description

      Avro should work with Hadoop's newer org.apache.hadoop.mapreduce API, in addition to the older org.apache.hadoop.mapred API.
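
For illustration, here is a minimal sketch of what a job using the new bindings might look like (class and method names follow the org.apache.avro.mapreduce package worked out in the comments below; treat the details as an approximation, not an excerpt from the patch):

    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.mapred.AvroKey;
    import org.apache.avro.mapreduce.AvroJob;
    import org.apache.avro.mapreduce.AvroKeyInputFormat;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;

    public class NewApiExample {
      // Avro data arrives wrapped in AvroKey and is handled by an ordinary
      // org.apache.hadoop.mapreduce.Mapper, per the design discussed below.
      public static class EchoMapper
          extends Mapper<AvroKey<CharSequence>, NullWritable, Text, NullWritable> {
        @Override
        protected void map(AvroKey<CharSequence> key, NullWritable value, Context context)
            throws IOException, InterruptedException {
          context.write(new Text(key.datum().toString()), NullWritable.get());
        }
      }

      public static void configure(Job job) {
        // Read Avro container files; the schema travels via the job configuration.
        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroJob.setInputKeySchema(job, Schema.create(Schema.Type.STRING));
        job.setMapperClass(EchoMapper.class);
      }
    }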

      Attachments

      1. AVRO-593-test.tgz (0.4 kB, Doug Cutting)
      2. AVRO-593.patch (256 kB, Doug Cutting)
      3. AVRO-593.patch (262 kB, Doug Cutting)
      4. AVRO-593.patch (259 kB, Doug Cutting)
      5. AVRO-593.patch (31 kB, Garrett Wu)


          Activity

          matt mead added a comment -

Quick question... it appears that this integration against the mapreduce API only supports deflate compression – is that right?

          Thanks for getting this in.

          Garrett Wu added a comment -

          Thank you, Doug, for doing the hard part – integration!

          Doug Cutting added a comment -

          I committed this. Thanks, Garrett!

          Scott Carey added a comment -

No objections, but I have not had time for a deep review and won't for more than a week. I don't think we need to hold this up for my full review; I can always create another ticket for later changes.

          Doug Cutting added a comment -

          I looked at this again today.

AvroKeyValue is similar to Pair but is implemented quite differently. Rather than itself having 'key' and 'value' fields, it wraps a GenericData.Record that has those two fields. This is exposed in its APIs. Converting its uses to Pair would thus be a major undertaking. Rather, I think we might just tolerate these two similar classes in the project.
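
To make the difference concrete, here is a rough sketch of the wrapping behavior described above (method names are my reading of the patch discussion, not a quoted excerpt):

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.hadoop.io.AvroKeyValue;

    public class KeyValueSketch {
      public static void main(String[] args) {
        // A generic record schema with 'key' and 'value' fields...
        Schema kvSchema = AvroKeyValue.getSchema(
            Schema.create(Schema.Type.STRING), Schema.create(Schema.Type.LONG));
        // ...which AvroKeyValue wraps, rather than being a record itself
        // the way Pair is.
        GenericData.Record record = new GenericData.Record(kvSchema);
        AvroKeyValue<CharSequence, Long> kv =
            new AvroKeyValue<CharSequence, Long>(record);
        kv.setKey("apple");
        kv.setValue(3L);
        System.out.println(kv.get());  // the wrapped record shows through the API
      }
    }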

          I'm also no longer convinced that it's worth trying to move SortedKeyValueFile into Avro's core. The reader & writer constructors are Hadoop Path-based and changing this would require inventing a new abstract file interface, since the implementation manipulates the file names.

          So I just implemented the one other change contemplated in my previous comment (replacing SeekableHadoopInput with the existing FsInput). Here's a new patch with that.
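
For context, FsInput adapts a Hadoop FileSystem and Path to Avro's SeekableInput; a small sketch of the sort of use it gets (assuming the standard DataFileReader API):

    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.mapred.FsInput;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ReadFromHdfs {
      public static void main(String[] args) throws Exception {
        // Open an Avro container file on HDFS through FsInput.
        FsInput input = new FsInput(new Path(args[0]), new Configuration());
        DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
            input, new GenericDatumReader<GenericRecord>());
        try {
          for (GenericRecord record : reader) {
            System.out.println(record);
          }
        } finally {
          reader.close();
        }
      }
    }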

          Does anyone object to committing this?

          Doug Cutting added a comment -

          > Ideally, anything in the .io, .util, and .file packages does not reference the .mapred or .mapreduce packages [ ... ]

          Much in these packages references AvroKey and AvroValue and/or AvroJob. These uses aren't mapreduce-specific and could be refactored away, e.g., by moving AvroKey and AvroValue from o.a.a.mapred to o.a.a.hadoop.io, but that would be incompatible.

          SortedKeyValueFile is the Avro equivalent of Hadoop's MapFile. Arguably it should be moved into o.a.a.io. It depends on AvroKeyValue, which might also be moved to the core. AvroKeyValue is very similar in functionality to o.a.a.mapred.Pair. Perhaps SortedKeyValueFile should be switched to use Pair and both moved to the core.

          I have implemented a SequenceFile shim and it works. There's now just a tiny class that needs to be in o.a.h.io, a base class that exposes two package-private nested classes from within SequenceFile.
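
The shim relies on ordinary Java package-level access; a toy illustration with invented names (Host standing in for SequenceFile, Shim for the tiny base class):

    // Host.java – stands in for SequenceFile, with a package-private nested class.
    package libpkg;
    public class Host {
      static class Hidden {
        protected void doWork() { System.out.println("hidden work"); }
      }
    }

    // Shim.java – must live in libpkg; it re-exposes Hidden to other packages.
    package libpkg;
    public abstract class Shim {
      public static class Exposed extends Host.Hidden {
        @Override
        public void doWork() { super.doWork(); }  // widen protected to public
      }
    }

    // Client.java – code in any other package can now use the exposed type.
    package app;
    public class Client {
      public static void main(String[] args) {
        new libpkg.Shim.Exposed().doWork();
      }
    }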

          I've re-arranged the classes per Scott's #4 variant but can revert that. We need to decide how much refactoring we want to do here.

          Finally, I note that io.SeekableHadoopInput replicates functionality that's already in mapred.FsInput, so we should replace the former with the latter in the new code.

          Scott Carey added a comment -

> I discussed that above. We could move it, but we'd still need a shim in o.a.h.io, since the subclass accesses package-private bits.

Let me clarify: Is it possible for AvroSequenceFile to not reference anything in o.a.hadoop.** or o.a.a.{mapreduce, hadoop, mapred}.** ?

          It has:

          import org.apache.avro.mapred.AvroKey;
          import org.apache.avro.mapred.AvroValue;
          

          which would indicate to me that it must be in o.a.a.mapred.

If it is in o.a.a it must not reference any classes that don't exist in the base avro module that encompasses o.a.a (lang/java/avro).

          Ideally, anything in the .io, .util, and .file packages does not reference the .mapred or .mapreduce packages, so that this can be packaged as a standalone hadoop dependency down the road. I have not looked at all those yet to see what the package dependencies are.

          Tom White added a comment -

          +1 to Scott's #4 variant.

          Doug Cutting added a comment -

          > Is it possible to move AvroSequenceFile under o.a.a ?

          I discussed that above. We could move it, but we'd still need a shim in o.a.h.io, since the subclass accesses package-private bits.

          > if we need to produce two otherwise identical modules in a build – one 0.23.x + compatible and one for the 0.20 / 0.22 / 1.0 users

          The nested Context classes in mapreduce's Mapper and Reducer went from abstract classes to interfaces (MAPREDUCE-954), requiring re-compilation of code that references these. But the mapreduce support added here does not reference these. So I think we're spared.

          Scott Carey added a comment -

We also need to consider if we need to produce two otherwise identical modules in a build – one 0.23.x+ compatible and one for the 0.20 / 0.22 / 1.0 users. My understanding is that one needs to compile against 0.23.x to work properly there. Organizing the modules so that it is possible to produce an Avro release that supports multiple Hadoop variants would be useful.

          Scott Carey added a comment -

Re #1: it's OK to have multiple packages in a single maven module, but not good to have a package split across modules, as it causes problems for OSGi and, in the future, Java 8 modules.

Re #2: This is OK, but a little confusing. Also, if we ever wanted to break apart the mapred module into two or three (e.g. avro-hadoop, avro-mapred, avro-mapreduce, with the common stuff in avro-hadoop and the two APIs in the others), it will be less consistent.

          Re #3: This is fairly clean, but is incompatible.

Re #4: This is decent, but I would propose: org.apache.avro.{hadoop,mapreduce,mapred,hadoop.io,hadoop.file,hadoop.util}. Then the current module would have o.a.a.{hadoop,mapreduce,mapred} and children packages. A future split could divide on these cleanly. One reason to split in the future is that some users may want hadoop stuff that is not related to mapreduce – sequence files, avro data file access via FileSystem+Path, etc. If we split the module, avoiding moving classes around is important.

Is it possible to move AvroSequenceFile under o.a.a ? All classes in that package need to be in the base avro maven module, and cannot depend on any hadoop APIs.

          Doug Cutting added a comment -

          I see a few choices:

1. org.apache.avro.{mapred,mapreduce,io,file,util}. This is what the code on github does. This would make the avro-mapred module contain things outside the org.apache.avro.mapred package, and splits Avro's io, file and util packages across multiple modules.

2. org.apache.avro.mapred.{mapreduce,io,file,util}. This is what my patch does. This is back-compatible and consistent with the module name, but places mapreduce under mapred, which is different than the Hadoop layout.

3. org.apache.avro.hadoop.{mapred,mapreduce,io,file,util}. We'd rename the module to be avro-hadoop. This would be incompatible but consistent with Hadoop. For back-compatibility we might leave the mapred classes in their current package.

4. org.apache.avro.{mapred,mapreduce,mapred.io,mapred.file,mapred.util}. This is back-compatible but includes a package that's not under the package of the module name.

          Tom, are you advocating for (4)? I'd be okay with that, I guess.

          I'm also leaning towards moving AvroSequenceFile under org.apache.avro and adding just a shim base class into org.apache.hadoop.io that subclasses SequenceFile and makes public the bits we need. That way if we get Hadoop to expose these bits the Avro API would not change.

          Tom White added a comment -

          > I renamed all of the packages to reside under org.apache.avro.mapred. So that package now has subpackages named io, file, util and mapreduce.

          Keeping the package org.apache.avro.mapreduce would be more consistent with Hadoop, which has the mapred/mapreduce distinction.

          Doug Cutting added a comment -

          Garrett, this code looks great! Thanks for contributing it.

          I renamed all of the packages to reside under org.apache.avro.mapred. So that package now has subpackages named io, file, util and mapreduce. That's consistent with other Avro modules, where classes are under org.apache.avro.<module>.

          The only exception is org.apache.hadoop.io.AvroSequenceFile. This is in a Hadoop package so that it can access some package-private parts of SequenceFile. This is fragile, as SequenceFile could change these non-public APIs. We should probably file an issue with Hadoop to make these items protected so that SequenceFile can be subclassed in a supported way.

          I plan to improve the javadoc a bit (adding package.html files to new packages) and move versions for new dependencies from mapred/pom.xml into the parent pom. Then I think this should be ready to commit.

          Doug Cutting added a comment -

          Here's a first pass at renaming the packages. Tests pass. I'll take a closer look next week.

          Garrett Wu added a comment -

          Thanks.

Yes, I intend this code to be contributed to Apache Avro. When I get some free cycles, I'll upload a patch with the {io,file} packages renamed. But anyone else should feel free if they have time first.

          Doug Cutting added a comment -

          Garrett, I just glanced at this and it looks great! You've factored things so that much of the code is shared between the 'mapred' and 'mapreduce' implementations.

The stuff in the 'file' and 'io' packages should probably be renamed. Currently the 'io' and 'file' packages are in the main avro jar, which does not require Hadoop. I think it's best not to split packages across multiple jars, and these classes depend on Hadoop, so they probably belong in the avro-mapred jar. Perhaps they should be renamed 'org.apache.avro.mapred.{io,file}'?

          Also, do you intend this code to be contributed to Apache Avro? (I ask as a legal formality.)

          Garrett Wu added a comment -

          Coincidentally, we just announced the release of our avro mapreduce code today as well: https://github.com/wibidata/odiago-avro

          Jeff Hammerbacher added a comment -

          There's some code at https://github.com/friso/avro-mapreduce for working with the new MapReduce API

          Tom White added a comment -

          Is anyone still working on this?

          Doug Cutting added a comment -

          I don't have a strong opinion about where the base classes should live. Perhaps they can just live in the mapred package?

          Thanks for working on this!

          Jeremy Hinegardner added a comment -

I can definitely see that. My goal is to be able to use the mapreduce API for Avro alongside the current stable release of Avro. If you have a suggestion of where these mapred/mapreduce base classes should live, package-structure-wise (org.apache.avro.mapreduce.common?), I'll work on it, and rework this patch to apply to current trunk also.

          Doug Cutting added a comment -

I'd actually prefer it if the implementations shared more rather than less, so that fixes and improvements would not need to be made twice. For example, AVRO-669 made significant changes to the mapred code that would also be useful for the mapreduce version. So it might be nice if both versions of AvroJob shared a common base class, with shared setters and getters, e.g., getInputKeyDatumReader(), etc., to minimize replication of logic.
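
A sketch of the shared-base-class idea (the class name and configuration key below are invented for illustration):

    import org.apache.avro.Schema;
    import org.apache.avro.io.DatumReader;
    import org.apache.avro.specific.SpecificDatumReader;
    import org.apache.hadoop.conf.Configuration;

    // Hypothetical common base: setter/getter logic written once and shared
    // by the mapred and mapreduce flavors of AvroJob.
    abstract class AvroJobBase {
      static final String INPUT_KEY_SCHEMA = "avro.input.key.schema";  // invented key
      protected final Configuration conf;

      protected AvroJobBase(Configuration conf) { this.conf = conf; }

      public void setInputKeySchema(Schema schema) {
        conf.set(INPUT_KEY_SCHEMA, schema.toString());
      }

      public Schema getInputKeySchema() {
        return new Schema.Parser().parse(conf.get(INPUT_KEY_SCHEMA));
      }

      // The kind of shared logic mentioned above.
      public DatumReader<Object> getInputKeyDatumReader() {
        return new SpecificDatumReader<Object>(getInputKeySchema());
      }
    }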

          Jeremy Hinegardner added a comment -

I have one small issue with this: mapred.AvroSerialization has new protected methods added to it so that mapreduce.AvroSerialization may inherit from it. This makes it a bit problematic to use this patch without fully patching the avro tree. If mapreduce.AvroSerialization were completely separated from mapred.AvroSerialization, then this patch could be used alongside the existing 1.4.1.

          I will attempt to rework this patch to do this, but it may not be until next week or so.

          Garrett Wu added a comment -

          Thanks for the info, Scott.

          Trying to avoid putting avro serialization 'inside' of Writables, I came up with this patch that tries to keep features/changes to a bare minimum. Let me know what you think.

          Scott Carey added a comment -

FYI: Wrapping Avro serialization 'inside' of Writable will work, but there will be some non-trivial performance cost to that. Writable requires more fine-grained reads and writes from the underlying stream, preventing optimal buffering for Avro.
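
To see where the cost comes from, consider a hypothetical Writable wrapper (a sketch, not code from any patch here): the Writable contract hands Avro a DataOutput one record at a time, so every record goes through its own small buffer:

    import java.io.ByteArrayOutputStream;
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DatumReader;
    import org.apache.avro.io.DatumWriter;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.hadoop.io.Writable;

    // Hypothetical wrapper putting Avro serialization 'inside' a Writable.
    public class AvroDatumWritable<T> implements Writable {
      private T datum;
      private final DatumWriter<T> writer;
      private final DatumReader<T> reader;

      public AvroDatumWritable(DatumWriter<T> writer, DatumReader<T> reader) {
        this.writer = writer;
        this.reader = reader;
      }

      public void set(T datum) { this.datum = datum; }
      public T get() { return datum; }

      public void write(DataOutput out) throws IOException {
        // One fresh buffer per record: Avro cannot batch its writes across
        // records, which is the buffering cost described above.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(buffer, null);
        writer.write(datum, encoder);
        encoder.flush();
        byte[] bytes = buffer.toByteArray();
        out.writeInt(bytes.length);
        out.write(bytes);
      }

      public void readFields(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        datum = reader.read(datum, decoder);
      }
    }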

          Garrett Wu added a comment -

          I'm also interested in using the newer mapreduce API with Avro, so I'm trying to write an AvroWritable and some input and output format classes that know how to deal with the schemas. I should have a patch next week, but the idea is:

          • Introduce new classes AvroKey and AvroValue that implement Writable.
          • Users can call AvroJob.setInputKeySchema(), AvroJob.setInputValueSchema(), AvroJob.setMapOutputKeySchema(), AvroJob.setMapOutputValueSchema(), AvroJob.setReduceOutputKeySchema(), AvroJob.setReduceOutputValueSchema() as needed.
• Provide AvroContainerFileInputFormat/AvroContainerFileOutputFormat, AvroSequenceFileInputFormat, AvroSequenceFileOutputFormat that read and write the schemas for the data appropriately. The schema in the sequence files can be stored in the header's metadata (see the sketch after this list).
          • Users can write Mappers and Reducers as they normally would. Note that this differs slightly from the org.apache.avro.mapred.* way of doing things – I don't plan to supply special AvroMapper and AvroReducer base classes or a new Serialization, since the AvroKey/AvroValue classes are Writable just like any other hadoop key/value type.
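
As an illustration of storing the schema in the sequence file header's metadata (the metadata key name here is invented; the classes the patch ended up with may differ):

    import org.apache.avro.Schema;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.DefaultCodec;

    public class SchemaInHeader {
      public static SequenceFile.Writer open(Configuration conf, Path path,
          Schema keySchema) throws Exception {
        // Put the key schema into the header metadata so readers can recover
        // it without out-of-band configuration.
        SequenceFile.Metadata meta = new SequenceFile.Metadata();
        meta.set(new Text("avro.key.schema"), new Text(keySchema.toString()));
        FileSystem fs = FileSystem.get(conf);
        return SequenceFile.createWriter(fs, conf, path,
            BytesWritable.class, BytesWritable.class,
            SequenceFile.CompressionType.NONE, new DefaultCodec(), null, meta);
      }
    }
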
          Scott Carey added a comment -

Is there a specific use case where this is failing for you, or is it just the use of deprecated APIs that is a problem?

          I suppose that integrating Avro with another library that is on the newer API could be an issue.

          Scott Carey added a comment -

          The old mapred API is being un-deprecated for 0.21 and is not going away soon. The new mapreduce API is not yet finished.

          However we will eventually need to support the newer API.


People

• Assignee: Garrett Wu
• Reporter: Steve Severance
• Votes: 8
• Watchers: 18
