Index: downloads.html
===================================================================
--- downloads.html	(revision 1226503)
+++ downloads.html	(working copy)
@@ -2,6 +2,18 @@
 
 <h2>Downloads</h2>
 
-We have not yet done an Apache release and Apache does not allow us to host non-Apache releases on this site. You can download the previous releases <a href="http://sna-projects.com/kafka/downloads.php">here</a>.
+<h3>0.7.0-incubating</h3>
+The current stable version is 0.7.0-incubating. See the <a href="http://people.apache.org/~nehanarkhede/kafka-0.7.0-incubating/RELEASE-NOTES.html">Release Notes</a>.
 
+Be sure to verify your downloads using these <a href="http://www.apache.org/info/verification.html">procedures</a> and these <a href="http://svn.apache.org/repos/asf/incubator/kafka/KEYS">KEYS</a>. <br/>
+<br/>
+
+Download Source: <a href="http://people.apache.org/~nehanarkhede/kafka-0.7.0-incubating/kafka-0.7.0-incubating-src.tar.gz">kafka-0.7.0-incubating-src.tar.gz</a> (<a href="http://people.apache.org/~nehanarkhede/kafka-0.7.0-incubating/kafka-0.7.0-incubating-src.tar.gz.asc">asc</a>, <a href="http://people.apache.org/~nehanarkhede/kafka-0.7.0-incubating/kafka-0.7.0-incubating-src.tar.gz.md5">md5</a>)
+
+<h3>Previous non-incubator releases</h3>
+You can download the previous releases <a href="http://sna-projects.com/kafka/downloads.php">here</a>.
+
+<h3>DISCLAIMER</h3>
+Apache Kafka is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
+
 <!--#include virtual="includes/footer.html" -->
Index: quickstart.html
===================================================================
--- quickstart.html	(revision 1226503)
+++ quickstart.html	(working copy)
@@ -37,14 +37,14 @@
 A toy producer script is available to send plain text messages. To use it, run the following command:
 
 <pre>
-<b>&gt; bin/kafka-producer-shell.sh --server kafka://localhost:9092 --topic test</b>
+<b>&gt; bin/kafka-producer-shell.sh --props config/producer.properties --topic test</b>
 > hello
 sent: hello (14 bytes)
 > world
 sent: world (14 bytes)
 </pre>
 
-<h3>Step 5: Start a consumer</h3>
+<h3>Step 4: Start a consumer</h3>
 
 Start a toy consumer to dump out the messages you sent to the console:
 
@@ -58,7 +58,7 @@
 
 If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
 
-<h3>Step 6: Write some code</h3>
+<h3>Step 5: Write some code</h3>
 
 Below is some very simple examples of using Kafka for sending messages, more complete examples can be found in the Kafka source code in the examples/ directory.
 
@@ -100,7 +100,7 @@
 
 <h5>2. Producer API </h5>
 
-With release 0.6, we introduced a new producer API - <code>kafka.producer.Producer&lt;T&gt;</code>. Here are examples of using the producer -
+Here are some examples of using the producer API, <code>kafka.producer.Producer&lt;T&gt;</code>:
 
 <ol>
 <li>First, start a local instance of the zookeeper server
@@ -209,12 +209,13 @@
 producer.send(data);	
 </pre>
 </li>
-<li>Use the asynchronous producer. This buffers writes in memory until either <code>batch.size</code> or <code>queue.time</code> is reached. After that, data is sent to the Kafka brokers
+<li>Use the asynchronous producer along with GZIP compression. This buffers writes in memory until either <code>batch.size</code> or <code>queue.time</code> is reached. After that, data is sent to the Kafka brokers
 <pre>
 Properties props = new Properties();
 props.put("zk.connect", "127.0.0.1:2181");
 props.put("serializer.class", "kafka.serializer.StringEncoder");
 props.put("producer.type", "async");
+props.put("compression.codec", "1");
 ProducerConfig config = new ProducerConfig(props);
 Producer&lt;String, String&gt; producer = new Producer&lt;String, String&gt;(config);
 ProducerData&lt;String, String&gt; data = new ProducerData&lt;String, String&gt;("test-topic", "test-message");
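The async producer settings above can also be kept in a properties file, as the quickstart's <code>--props config/producer.properties</code> flag suggests. A minimal sketch; the file name and the exact key set shown here are illustrative, not part of this patch:

```properties
# Hypothetical producer.properties mirroring the async + GZIP example above
zk.connect=127.0.0.1:2181
serializer.class=kafka.serializer.StringEncoder
producer.type=async
# 1 selects GZIP compression in 0.7 (0 = no compression)
compression.codec=1
```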
@@ -295,10 +296,10 @@
 
   <small>// get the message set from the consumer and print them out</small>
   ByteBufferMessageSet messages = consumer.fetch(fetchRequest);
-  for(Message message : messages) {
-    System.out.println("consumed: " + Utils.toString(message.payload(), "UTF-8"));
+  for(MessageAndOffset msg : messages) {
+    System.out.println("consumed: " + Utils.toString(msg.message.payload(), "UTF-8"));
     <small>// advance the offset after consuming each message</small>
-    offset += MessageSet.entrySize(message);
+    offset = msg.offset;
   }
 }
 </pre>
Index: includes/header.html
===================================================================
--- includes/header.html	(revision 1226503)
+++ includes/header.html	(working copy)
@@ -26,7 +26,7 @@
 			<div class="lsidebar">
 				<ul>
 					<li><a href="downloads.html">download</a></li>
-					<li><a href="api-docs/0.6">api&nbsp;docs</a></li>
+					<li><a href="http://people.apache.org/~nehanarkhede/kafka-0.7.0-incubating/docs">api&nbsp;docs</a></li>
 					<li><a href="quickstart.html">quickstart</a></li>
 					<li><a href="design.html">design</a></li>
 					<li><a href="configuration.html">configuration</a></li>
Index: configuration.html
===================================================================
--- configuration.html	(revision 1226503)
+++ configuration.html	(working copy)
@@ -18,6 +18,11 @@
     <td>Each broker is uniquely identified by an id. This id serves as the brokers "name", and allows the broker to be moved to a different host/port without confusing consumers.</td>
 </tr>
 <tr>
+    <td><code>enable.zookeeper</code></td>
+    <td>true</td>
+    <td>Enables ZooKeeper registration in the server.</td>
+</tr>
+<tr>
      <td><code>log.flush.interval</code></td>
      <td>500</td>
      <td>Controls the number of messages accumulated in each topic (partition) before the data is flushed to disk and made available to consumers.</td>  
@@ -49,6 +54,11 @@
     <td>Topic-specific retention time that overrides <code>log.retention.hours</code>, e.g., topic1:10,topic2:20</td>
 </tr>
 <tr>
+    <td><code>log.retention.size</code></td>
+    <td>-1</td>
+    <td>The maximum size of the log before it is deleted. This controls how large a log is allowed to grow.</td>
+</tr>
+<tr>
     <td><code>log.cleanup.interval.mins</code></td>
     <td>10</td>
     <td>Controls how often the log cleaner checks logs eligible for deletion. A log file is eligible for deletion if it hasn't been modified for <code>log.retention.hours</code> hours.</td>
@@ -64,22 +74,42 @@
     <td>Controls the maximum size of a single log file.</td>
 </tr>
 <tr>
+    <td><code>max.socket.request.bytes</code></td>
+    <td>104857600</td>
+    <td>The maximum number of bytes in a socket request.</td>
+</tr>
+<tr>
+    <td><code>monitoring.period.secs</code></td>
+    <td>600</td>
+    <td>The interval in seconds at which to measure performance statistics.</td>
+</tr>
+<tr>
     <td><code>num.threads</code></td>
     <td>Runtime.getRuntime().availableProcessors</td>
     <td>Controls the number of worker threads in the broker to serve requests.</td>
 </tr>
 <tr>
-    <td><code>num.partitions</code> </td>
+    <td><code>num.partitions</code></td>
     <td>1</td>
     <td>Specifies the default number of partitions per topic.</td>
 </tr>
 <tr>
+    <td><code>socket.send.buffer</code></td>
+    <td>102400</td>
+    <td>The SO_SNDBUF buffer for the socket server's sockets.</td>
+</tr>
+<tr>
+    <td><code>socket.receive.buffer</code></td>
+    <td>102400</td>
+    <td>The SO_RCVBUF buffer for the socket server's sockets.</td>
+</tr>
+<tr>
     <td><code>topic.partition.count.map</code></td>
     <td>none</td>
     <td>Override parameter to control the number of partitions for selected topics. E.g., topic1:10,topic2:20</td>
 </tr>
 <tr>
-    <td><code>zk.connect</code> </td>
+    <td><code>zk.connect</code></td>
     <td>localhost:2182/kafka</td>
     <td>Specifies the zookeeper connection string in the form hostname:port/chroot. Here the chroot is a base directory which is prepended to all path operations (this effectively namespaces all kafka znodes to allow sharing with other applications on the same zookeeper cluster)</td>
 </tr>
@@ -168,6 +198,26 @@
     <td>-1</td>
     <td>By default, this value is -1 and a consumer blocks indefinitely if no new message is available for consumption. By setting the value to a positive integer, a timeout exception is thrown to the consumer if no message is available for consumption after the specified timeout value.</td>
 </tr>
+<tr>
+    <td><code>rebalance.retries.max</code></td>
+    <td>4</td>
+    <td>The maximum number of retries during a consumer rebalance.</td>
+</tr>
+<tr>
+    <td><code>mirror.topics.whitelist</code></td>
+    <td>""</td>
+    <td>Whitelist of topics for this mirror's embedded consumer to consume. At most one of whitelist/blacklist may be specified.</td>
+</tr>
+<tr>
+    <td><code>mirror.topics.blacklist</code></td>
+    <td>""</td>
+    <td>Topics to skip mirroring. At most one of whitelist/blacklist may be specified.</td>
+</tr>
+<tr>
+    <td><code>mirror.consumer.numthreads</code></td>
+    <td>4</td>
+    <td>The default number of threads per topic for the mirroring consumer.</td>
+</tr>
 </table>
 
 
@@ -270,7 +320,23 @@
     <td><code>max.message.size</code> </td>
     <td>1000000</td>
     <td>the maximum number of bytes that the kafka.producer.SyncProducer can send as a single message payload</td>
-</tr></table>
+</tr>
+<tr>
+    <td><code>compression.codec</code></td>
+    <td>0 (No compression)</td>
+    <td>This parameter allows you to specify the compression codec for all data generated by this producer.</td>
+</tr>
+<tr>
+    <td><code>compressed.topics</code></td>
+    <td>null</td>
+    <td>This parameter controls which topics compression applies to. If the compression codec is anything other than NoCompressionCodec, compression is enabled only for the topics listed here; if the list is empty, the codec applies to all topics. If the codec is NoCompressionCodec, compression is disabled for all topics.</td>
+</tr>
+<tr>
+    <td><code>zk.read.num.retries</code></td>
+    <td>3</td>
+    <td>The producer that uses the zookeeper software load balancer maintains a ZK cache that is updated by the zookeeper watcher listeners. During events such as a broker bounce, this cache can briefly become inconsistent, and in that window the producer may pick a broker partition that is unavailable. When this happens, the ZK cache needs to be updated. This parameter specifies the number of times the producer attempts to refresh it.</td>
+</tr>
+</table>
 
 
 <!--#include virtual="includes/footer.html" -->
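As a sketch of how the two new producer compression settings interact, per the table rows above; the topic names are placeholders and the list format is an assumption, not taken from this patch:

```properties
# Enable GZIP (codec 1), but only for the listed topics;
# all other topics are sent uncompressed.
compression.codec=1
compressed.topics=topic1,topic2
```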
