Index: design.html
===================================================================
--- design.html	(revision 1659006)
+++ design.html	(working copy)
@@ -97,7 +97,7 @@
 <p>
 Batching is one of the big drivers of efficiency, and to enable batching the Kafka producer has an asynchronous mode that accumulates data in memory and sends out larger batches in a single request. The batching can be configured to accumulate no more than a fixed number of messages and to wait no longer than some fixed latency bound (say 100 messages or 5 seconds). This allows the accumulation of more bytes to send, and few larger I/O operations on the servers. Since this buffering happens in the client it obviously reduces the durability as any data buffered in memory and not yet sent will be lost in the event of a producer crash.
 <p>
-Note that as of Kafka 0.8.1 the async producer does not have a callback, which could be used to register handlers to catch send errors.  Adding such callback functionality is proposed for Kafka 0.9, see <a href="https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI">Proposed Producer API</a>.
+Note that Kafka 0.8.2.0 includes a new Java producer, which supports registering a callback to handle send errors. The functionality of the new producer was proposed in the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI">Proposed Producer API</a> wiki page.
 
 <h3><a id="theconsumer">4.5 The Consumer</a></h3>
 
@@ -271,7 +271,7 @@
 <ol>
 <li><i>Database change subscription</i>. It is often necessary to have a data set in multiple data systems, and often one of these systems is a database of some kind (either a RDBMS or perhaps a new-fangled key-value store). For example you might have a database, a cache, a search cluster, and a Hadoop cluster. Each change to the database will need to be reflected in the cache, the search cluster, and eventually in Hadoop. In the case that one is only handling the real-time updates you only need recent log. But if you want to be able to reload the cache or restore a failed search node you may need a complete data set.
 <li><i>Event sourcing</i>. This is a style of application design which co-locates query processing with application design and uses a log of changes as the primary store for the application.
-<li><i>Journaling for high-availability</i>. A process that does local computation can be made fault-tolerant by logging out changes that it makes to it's local state so another process can reload these changes and carry on if it should fail. A concrete example of this is handling counts, aggregations, and other "group by"-like processing in a stream query system. Samza, a real-time stream-processing framework, <a href="http://samza.incubator.apache.org/learn/documentation/0.7.0/container/state-management.html">uses this feature</a> for exactly this purpose.
+<li><i>Journaling for high-availability</i>. A process that does local computation can be made fault-tolerant by logging out changes that it makes to its local state so another process can reload these changes and carry on if it should fail. A concrete example of this is handling counts, aggregations, and other "group by"-like processing in a stream query system. Samza, a real-time stream-processing framework, <a href="http://samza.apache.org/learn/documentation/0.7.0/container/state-management.html">uses this feature</a> for exactly this purpose.
 </ol>
 In each of these cases one needs primarily to handle the real-time feed of changes, but occasionally, when a machine crashes or data needs to be re-loaded or re-processed, one needs to do a full load. Log compaction allows feeding both of these use cases off the same backing topic.
 
@@ -322,7 +322,7 @@
 <p>
 <h4>Configuring The Log Cleaner</h4>
 
-As of 0.8.1 the log cleaner is disabled by default. To enable it set the server config
+The log cleaner is disabled by default. To enable it, set the server config
   <pre>  log.cleaner.enable=true</pre>
 This will start the pool of cleaner threads. To enable log cleaning on a particular topic you can add the log-specific property
   <pre>  log.cleanup.policy=compact</pre>
Index: ops.html
===================================================================
--- ops.html	(revision 1659006)
+++ ops.html	(working copy)
@@ -42,7 +42,8 @@
 <pre>
  &gt; bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name
 </pre>
-WARNING: Delete topic functionality is beta in 0.8.1. Please report any bugs that you encounter on the <a href="mailto: users@kafka.apache.org">mailing list</a> or <a href="https://issues.apache.org/jira/browse/KAFKA">JIRA</a>.
+Topic deletion is disabled by default. To enable it, set the server config
+  <pre>delete.topic.enable=true</pre>
 <p>
 Kafka does not currently support reducing the number of partitions for a topic or changing the replication factor.
 
Index: quickstart.html
===================================================================
--- quickstart.html	(revision 1659006)
+++ quickstart.html	(working copy)
@@ -4,11 +4,11 @@
 
 <h4> Step 1: Download the code </h4>
 
-<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz" title="Kafka downloads">Download</a> the 0.8.1.1 release and un-tar it.
+<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz" title="Kafka downloads">Download</a> the 0.8.2.0 release and un-tar it.
 
 <pre>
-&gt; <b>tar -xzf kafka_2.9.2-0.8.1.1.tgz</b>
-&gt; <b>cd kafka_2.9.2-0.8.1.1</b>
+&gt; <b>tar -xzf kafka_2.10-0.8.2.0.tgz</b>
+&gt; <b>cd kafka_2.10-0.8.2.0</b>
 </pre>
 
 <h4>Step 2: Start the server</h4>
Index: upgrade.html
===================================================================
--- upgrade.html	(revision 1659006)
+++ upgrade.html	(working copy)
@@ -1,4 +1,9 @@
 <h3><a id="upgrade">1.5 Upgrading From Previous Versions</a></h3>
+
+<h4>Upgrading from 0.8.1 to 0.8.2.0</h4>
+
+0.8.2.0 is fully compatible with 0.8.1. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.
+
 <h4>Upgrading from 0.8.0 to 0.8.1</h4>
 
 0.8.1 is fully compatible with 0.8. The upgrade can be done one broker at a time by simply bringing it down, updating the code, and restarting it.
Index: uses.html
===================================================================
--- uses.html	(revision 1659006)
+++ uses.html	(working copy)
@@ -28,7 +28,7 @@
 
 <h4>Stream Processing</h4>
 
-Many users end up doing stage-wise processing of data where data is consumed from topics of raw data and then aggregated, enriched, or otherwise transformed into new Kafka topics for further consumption. For example a processing flow for article recommendation might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might help normalize or deduplicate this content to a topic of cleaned article content; a final stage might attempt to match this content to users. This creates a graph of real-time data flow out of the individual topics. <a href="https://github.com/nathanmarz/storm">Storm</a> and <a href="http://samza.incubator.apache.org/">Samza</a> are popular frameworks for implementing these kinds of transformations.
+Many users end up doing stage-wise processing of data where data is consumed from topics of raw data and then aggregated, enriched, or otherwise transformed into new Kafka topics for further consumption. For example a processing flow for article recommendation might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might help normalize or deduplicate this content to a topic of cleaned article content; a final stage might attempt to match this content to users. This creates a graph of real-time data flow out of the individual topics. <a href="https://storm.apache.org/">Storm</a> and <a href="http://samza.apache.org/">Samza</a> are popular frameworks for implementing these kinds of transformations.
 
 <h4>Event Sourcing</h4>
 
@@ -36,4 +36,4 @@
 
 <h4>Commit Log</h4>
 
-Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The <a href="/documentation.html#compaction">log compaction</a> feature in Kafka helps support this usage. In this usage Kafka is similar to <a href="http://zookeeper.apache.org/bookkeeper/">Apache BookKeeper</a> project.
\ No newline at end of file
+Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The <a href="/documentation.html#compaction">log compaction</a> feature in Kafka helps support this usage. In this usage Kafka is similar to the <a href="http://zookeeper.apache.org/bookkeeper/">Apache BookKeeper</a> project.
