Index: src/main/webapp/user-guide/deploy.conf
===================================================================
--- src/main/webapp/user-guide/deploy.conf	(revision 1145271)
+++ src/main/webapp/user-guide/deploy.conf	(working copy)
@@ -1,12 +1,13 @@
 h1. Deploy Cellar
 
-This chapter described how to deploy and start Cellar into a running Apache Karaf instance. This chapter assumes that you already know the Apache Karaf basics, especially the notion of feature and shell usage.
+This chapter describes how to deploy and start Cellar into a running Apache Karaf instance. This chapter 
+assumes that you already know the Apache Karaf basics, especially the notion of feature and shell usage.
 
 h2. Registering Cellar features
 
 Karaf Cellar is provided as a Karaf features XML descriptor.
 
-Simply register Cellar feature URL in your Karaf instance:
+Simply register the Cellar feature URL in your Karaf instance:
 
 {code}
 karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.0-SNAPSHOT/xml/features
@@ -22,13 +23,13 @@
 
 h2. Starting Cellar
 
-To start Cellar in your Karaf instance, you only need to install the cellar feature:
+To start Cellar in your Karaf instance, you only need to install the Cellar feature:
 
 {code}
 karaf@root> features:install cellar
 {code}
 
-You can see the Cellar components (bundles) installed:
+You can now see the Cellar components (bundles) installed:
 
 {code}
 karaf@root> la|grep -i cellar
Index: src/main/webapp/user-guide/groups.conf
===================================================================
--- src/main/webapp/user-guide/groups.conf	(revision 1145271)
+++ src/main/webapp/user-guide/groups.conf	(working copy)
@@ -1,6 +1,8 @@
 h1. Cellar groups
 
-You can define groups in Cellar. It allows you to select nodes and resources involved in one group. Thanks to that, not all node 'sync' with each other.
+You can define groups in Cellar. A group allows you to define the specific nodes and resources that
+work together. This way, nodes outside a group do not need to sync with changes made on a node
+within that group.
 
 By default, the Cellar nodes go into the default group:
 
@@ -24,7 +26,7 @@
 
 {code}
 
-For now, the test group hasn't any member:
+For now, the test group doesn't have any members:
 
 {code}
 kaaf@root> cluster:group-list
@@ -37,7 +39,7 @@
 
 h2. Group configuration
 
-You can see the configuration PID associated to a given group, for instance the default group:
+You can see the configuration PID associated with a given group, for instance the default group:
 
 {code}
 karaf@root> cluster:config-list default
@@ -97,7 +99,8 @@
 
 The node could be local or remote.
 
-Now, the members of a given group will inherit of all configuration defined in the group. It means that the node1 now knows the tstcfg configuration as test group member:
+Now, the members of a given group will inherit all configuration defined in the group. This means that
+node1 now knows the tstcfg configuration because it's a test group member:
 
 {code}
 karaf@root> config:edit tstcfg
Index: src/main/webapp/user-guide/installation.conf
===================================================================
--- src/main/webapp/user-guide/installation.conf	(revision 1145271)
+++ src/main/webapp/user-guide/installation.conf	(working copy)
@@ -6,14 +6,15 @@
 
 As Cellar is a Karaf sub-project, you need a running Karaf instance.
 
-Karaf Cellar is provided under a Karaf features descriptor. The easiest way to install is just to have an internet connection from the Karaf running instance.
+Karaf Cellar is provided as a Karaf features descriptor. The easiest way to install it is to
+have an internet connection from the running Karaf instance.
 
 h2. Building from Sources
 
 If you intend to build Karaf Cellar from the sources, the requirements are:
 
 *Hardware:*
-* 100MB of free disk space for the Apache Karaf Cellar x.y source distributions or SVN checkout, the Maven build and the dependencies Maven downloads.
+* 100MB of free disk space for the Apache Karaf Cellar x.y source distributions or SVN checkout, the Maven build and the dependencies that Maven downloads.
 
 *Environment:*
 * Java SE Developement Kit 1.6.x or greater ([http://www.oracle.com/technetwork/java/javase/]).
Index: src/main/webapp/user-guide/nodes.conf
===================================================================
--- src/main/webapp/user-guide/nodes.conf	(revision 1145271)
+++ src/main/webapp/user-guide/nodes.conf	(working copy)
@@ -1,10 +1,11 @@
 h1. Cellar nodes
 
-This chapter describes the Cellar nodes manipulation command.
+This chapter describes the Cellar nodes manipulation commands.
 
 h2. Nodes identification
 
-When you installed Cellar feature, your Karaf instance became automatically a Cellar cluster node, and try to discover the others Cellar nodes.
+When you installed the Cellar feature, your Karaf instance automatically became a Cellar cluster node,
+and hence tries to discover the other Cellar nodes.
 
 You can list the known Cellar nodes using the list-nodes command:
 
@@ -15,7 +16,7 @@
      2 node2.local       5702 node2.local:5702
 {code}
 
-The starting * indicates that it's the Karaf instance on which you are log on (the local node).
+The leading * indicates the Karaf instance on which you are logged in (the local node).
 
 h2. Testing nodes
 
@@ -48,6 +49,6 @@
 [installed  ] [3.0.0-SNAPSHOT ] eventadmin                    karaf-3.0.0-SNAPSHOT
 {code}
 
-Features uninstall works in the same way. Basically, Cellar sync is completely transparent.
+Features uninstall works in the same way. Basically, Cellar synchronization is completely transparent.
 
-Configuration is also sync.
+Configuration is also synchronized.
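+
+To check this (a hypothetical walkthrough, assuming a two-node cluster in the default group and an
+illustrative PID named tstcfg with a made-up property), set a configuration property on one node:
+
+{code}
+karaf@root> config:edit tstcfg
+karaf@root> config:propset myKey myValue
+karaf@root> config:update
+{code}
+
+Then, on another node of the same group, the property should appear:
+
+{code}
+karaf@root> config:edit tstcfg
+karaf@root> config:proplist
+{code}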
Index: src/main/webapp/architecture-guide/hazelcast.conf
===================================================================
--- src/main/webapp/architecture-guide/hazelcast.conf	(revision 1145271)
+++ src/main/webapp/architecture-guide/hazelcast.conf	(working copy)
@@ -1,12 +1,17 @@
 h1. The role of Hazelcast
 
+The idea behind the clustering engine is that for each unit that we want to replicate, we create an event,
+broadcast the event to the cluster and store the unit state in a shared resource, so that the rest of the
+nodes can look up and retrieve the changes.
 
 !/images/shared_architecture.jpg!
 
-For instance, we want all nodes in our cluster to share configuration for PIDs a.b.c and x.y.z. On node "Karaf A" a change occurs on a.b.c. "Karaf A" updated the shared repository data for a.b.c and then notifies the rest of the nodes that a.b.c has changed. Each node looks up the shared repository and retrieves changes.
+For instance, we want all nodes in our cluster to share configuration for PIDs a.b.c and x.y.z. On node
+"Karaf A" a change occurs on a.b.c. "Karaf A" updates the shared repository data for a.b.c and then notifies
+the rest of the nodes that a.b.c has changed. Each node looks up the shared repository and retrieves the changes.
 
-The architecture as described so far could be implemented using a database/shared filesystem as a shared resource and polling instead of multicasting events. So why use Hazelcast ?
+The architecture as described so far could be implemented using a database/shared filesystem as a shared
+resource and polling instead of multicasting events. So why use Hazelcast?
 
 Hazelcast fits in perfectly because it offers:
 
@@ -19,4 +24,5 @@
 * Provides distributed topics
 ** Using in memory distributed topics allows us to broadcast events/commands which are valuable for management and monitoring.
 
-In other words, Hazelcast allows us to setup a cluster with zero configuration and no dependency to external systems such as a database or a shared file system.
+In other words, Hazelcast allows us to set up a cluster with zero configuration and no dependency on
+external systems such as a database or a shared file system.
Index: src/main/webapp/architecture-guide/broadcasting_commands.conf
===================================================================
--- src/main/webapp/architecture-guide/broadcasting_commands.conf	(revision 1145271)
+++ src/main/webapp/architecture-guide/broadcasting_commands.conf	(working copy)
@@ -1,6 +1,11 @@
 h1. Broadcasting commands
 
-Commands are a special kind of events. They imply that when they are handled, a Result event will be fire containing the outcome of the command. For each command, we have one result per recipient. Each command contains an unique id (unique for all cluster nodes, create from Hazelcast). This id is used to correlate the request with the result. For each result successfully correlated the result is added to list of results on the command object. If the list gets full of if 10 seconds from the command execution have elapsed, the list is moved to a blocking queue from which the result can be retrieved.
+Commands are a special kind of event. They imply that when they are handled, a Result event will be fired 
+containing the outcome of the command. For each command, we have one result per recipient. Each command 
+contains a unique id (unique for all cluster nodes, generated by Hazelcast). This id is used to correlate 
+the request with the result. Each result successfully correlated is added to the list of results
+on the command object. If the list gets full or if 10 seconds from the command execution have elapsed, the 
+list is moved to a blocking queue from which the result can be retrieved.
 
 The following code snippet shows what happens when a command is sent for execution:
 
Index: src/main/webapp/architecture-guide/overview.conf
===================================================================
--- src/main/webapp/architecture-guide/overview.conf	(revision 1145271)
+++ src/main/webapp/architecture-guide/overview.conf	(working copy)
@@ -1,9 +1,15 @@
 h1. Architecture Overview
 
-The core concept behind Karaf Cellar is that each node can be a part of one or more groups, that provide the node distributed memory for keeping data (e.g. configuration, features information, other) and a topic which is used to exchange events with the rest group members.
+The core concept behind Karaf Cellar is that each node can be a part of one or more groups that 
+provide the node with distributed memory for keeping data (e.g. configuration, features information,
+and other data) and a topic which is used to exchange events with the rest of the group members.
 
 !/images/architecture.png!
 
-Each group comes with a configuration, which defines which events are to be broadcasted and which are not. Whenever a local change occurs to a node, the node will read the setup information of all the groups that i belongs to and broadcast the event to the groups that whiteless the specific event.
+Each group comes with a configuration, which defines which events are to be broadcast and which are
+not. Whenever a local change occurs on a node, the node will read the setup information of all the 
+groups that it belongs to and broadcast the event to the groups for which the specific event is whitelisted.
 
-The broadcast operation is happening via the distributed topic provided by the group. For the groups that the broadcast is supported, the distributed configuration data will be updated so that nodes that join in the future can pickup the change.
+The broadcast operation happens via the distributed topic provided by the group. For the groups
+where broadcast is supported, the distributed configuration data will be updated so that nodes
+that join in the future can pick up the change.
Index: src/main/webapp/architecture-guide/design.conf
===================================================================
--- src/main/webapp/architecture-guide/design.conf	(revision 1145271)
+++ src/main/webapp/architecture-guide/design.conf	(working copy)
@@ -13,8 +13,13 @@
 
 !/images/event_flow.jpg!
 
-The OSGi specification uses Events and Listener paradigm in a lot of situations (e.g. ConfigurationChangeEvent and ConfigurationListener). By implementing such Listener and expose it as an OSGi service to the Service Registry, we are sure that we "listen" for the interesting events.
+The OSGi specification uses the Events and Listener paradigm in many situations (e.g. ConfigurationChangeEvent 
+and ConfigurationListener). By implementing such a Listener and exposing it as an OSGi service to the Service 
+Registry, we can be sure that we are "listening" for the events of interest.
 
-When the listener is notified of an event, it forwards the Event object to a Hazelcazst distributed topic. To keep things as simple as possible, we keep a single topic for all event types. Each node has a listener registered on that topic and gets/sends all events to the event dispatcher.
+When the listener is notified of an event, it forwards the Event object to a Hazelcast distributed topic. To 
+keep things as simple as possible, we keep a single topic for all event types. Each node has a listener 
+registered on that topic and gets/sends all events to the event dispatcher.
 
-When the Event Dispatcher receives an event, it looks up an internal registry (in our case the OSGi Service Registry) to find an Event Handler that can handle the received Event. The handler found receives the event and processes it.
+When the Event Dispatcher receives an event, it looks up an internal registry (in our case the OSGi Service Registry)
+to find an Event Handler that can handle the received event. The handler found receives the event and processes it.
Index: src/main/webapp/architecture-guide/supported_events.conf
===================================================================
--- src/main/webapp/architecture-guide/supported_events.conf	(revision 1145271)
+++ src/main/webapp/architecture-guide/supported_events.conf	(working copy)
@@ -6,10 +6,14 @@
 * Features repository added/removed event.
 * Features installed/uninstalled event.
 
-For each of the vent types above a group may be configured to enabled synchronization, and to provide a whitelis/blacklist of specific event ids.
+For each of the event types above a group may be configured to enable synchronization, and to provide 
+a whitelist/blacklist of specific event IDs.
 
-For instance, the default group is configured to allow synchronization of configuration. This means that whenever a change occurs via the config admin to a specific PID, the change will pass to the distributed memory of the default group and will also be boardcasted to all other default group members using the topic.
+For instance, the default group is configured to allow synchronization of configuration. This means that 
+whenever a change occurs via the config admin to a specific PID, the change will pass to the distributed
+memory of the default group and will also be broadcast to all other default group members using the topic.
 
-This is happening for all PIDs but not for org.apache.karaf.cellar.node which is marked as blacklisted and will never be written or read from the distributed memory, nor will boardcasted via the topic.
+This happens for all PIDs except org.apache.karaf.cellar.node, which is marked as blacklisted 
+and will never be written to or read from the distributed memory, nor will it be broadcast via the topic.
 
 The user can add/remove any PID he wishes to the whitelist/blacklist.
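+
+As an illustration, the whitelist/blacklist is managed through the Cellar groups configuration. The
+property names below are indicative of the layout only (they may differ between Cellar versions, so
+check the file shipped with your release):
+
+{code}
+# etc/org.apache.karaf.cellar.groups.cfg (layout is indicative)
+default.config.whitelist.inbound = *
+default.config.whitelist.outbound = *
+default.config.blacklist.inbound = org.apache.karaf.cellar.node
+default.config.blacklist.outbound = org.apache.karaf.cellar.node
+{code}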
