Index: content/resources/docs/1.99.4/BuildingSqoop2.html =================================================================== --- content/resources/docs/1.99.4/BuildingSqoop2.html (revision 0) +++ content/resources/docs/1.99.4/BuildingSqoop2.html (working copy) @@ -0,0 +1,101 @@ + + + + + + + + + + Building Sqoop2 from source code — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Building Sqoop2 from source code

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Building Sqoop2 from source code

+

This guide will show you how to build Sqoop2 from source code. Sqoop uses maven as its build system. You will need to use at least version 3.0, as older versions will not work correctly. All other dependencies will be downloaded by maven automatically, with the exception of special JDBC drivers that are needed only for advanced integration tests.

+
+

Downloading source code

+

The Sqoop project uses git as its revision control system, hosted at the Apache Software Foundation. You can clone the entire repository using the following command:

+
git clone https://git-wip-us.apache.org/repos/asf/sqoop.git sqoop2
+
+
+

Sqoop2 is currently developed in the dedicated branch sqoop2, which you need to check out after cloning:

+
cd sqoop2
+git checkout sqoop2
+
+
+
+
+

Building project

+

You can use the usual maven targets like compile or package to build the project. Sqoop supports two major Hadoop revisions at the moment - 1.x and 2.x. As compiled code for one Hadoop major version can’t be used on another, you must compile Sqoop against the appropriate Hadoop version. You can change the target Hadoop version by specifying -Dhadoop.profile=$hadoopVersion on the maven command line. Possible values of $hadoopVersion are 100 and 200 for Hadoop versions 1.x and 2.x respectively. Sqoop compiles against Hadoop 2 by default. The following example compiles Sqoop against Hadoop 1.x:

+
mvn compile -Dhadoop.profile=100
+
+
+

The maven target package can be used to create Sqoop packages similar to the ones that are officially available for download. Sqoop builds only the source tarball by default. You need to specify -Pbinary to build the binary distribution. You might also need to explicitly specify the Hadoop version if the default is not appropriate.

+
mvn package -Pbinary
+
+
+
+
+

Running tests

+

Sqoop supports two different sets of tests. The first, smaller and much faster, set is called unit tests and is executed by the maven target test. The second, larger set of integration tests is executed by the maven target integration-test. Please note that integration tests might require manual steps for installing various JDBC drivers into your local maven cache.

+

Example for running unit tests:

+
mvn test
+
+
+

Example for running integration tests:

+
mvn integration-test
+
+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/ClientAPI.html =================================================================== --- content/resources/docs/1.99.4/ClientAPI.html (revision 0) +++ content/resources/docs/1.99.4/ClientAPI.html (working copy) @@ -0,0 +1,370 @@ + + + + + + + + + + Sqoop Java Client API Guide — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Sqoop Java Client API Guide

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Sqoop Java Client API Guide

+

This document will explain how to use the Sqoop Java Client API from an external application. The Client API allows you to execute the functions of sqoop commands. It requires the Sqoop Client JAR and its dependencies.

+

The main class that provides wrapper methods for all the supported operations is the following:

+
public class SqoopClient {
+  ...
+}
+
+
+

The Java Client API is explained using the Generic JDBC Connector as an example. Before executing an application that uses the sqoop client API, check whether the sqoop server is running.

+
+

Workflow

+

The following workflow has to be followed for executing a sqoop job on the Sqoop server (a combined sketch follows the list).

+
+
1. Create a LINK object for a given connectorId - creates a Link object and returns a linkId (lid)
2. Create a JOB for a given “from” and “to” linkId - creates a Job object and returns a jobId (jid)
3. Start the JOB for a given jobId - starts the Job on the server and creates a submission record
+
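A minimal sketch of this workflow using the client API; the connector ids 1 and 2 are hypothetical, and the config-filling steps (elided here) are explained in the sections below:

String url = "http://localhost:12000/sqoop/";
SqoopClient client = new SqoopClient(url);

// 1. Create a LINK for each connector involved and save it;
//    the persisted id is the linkId (connector ids 1 and 2 are hypothetical)
MLink fromLink = client.createLink(1);
// ... fill in fromLink.getConnectorLinkConfig() inputs here ...
client.saveLink(fromLink);
MLink toLink = client.createLink(2);
// ... fill in toLink.getConnectorLinkConfig() inputs here ...
client.saveLink(toLink);

// 2. Create a JOB for the "from" and "to" linkIds and save it
MJob job = client.createJob(fromLink.getPersistenceId(), toLink.getPersistenceId());
// ... fill in from/to job configs and the driver config here ...
client.saveJob(job);

// 3. Start the JOB for the given jobId; a submission record is created on the server
MSubmission submission = client.startJob(job.getPersistenceId());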
+
+
+

Project Dependencies

+

The maven dependency is given below:

+
<dependency>
  <groupId>org.apache.sqoop</groupId>
  <artifactId>sqoop-client</artifactId>
  <version>${requestedVersion}</version>
</dependency>
+
+
+
+
+

Initialization

+

First, initialize the SqoopClient class with the server URL as its argument:

+
String url = "http://localhost:12000/sqoop/";
+SqoopClient client = new SqoopClient(url);
+
+
+

The server URL value can be modified by passing the new value to the setServerUrl(String) method:

+
client.setServerUrl(newUrl);
+
+
+
+ +
+

Job

+

A sqoop job holds the From and To parts for transferring data from the From data source to the To data source. Both the From and the To are uniquely identified by their corresponding connector link Ids, i.e. when creating a job we have to specify the FromLinkId and the ToLinkId. Thus the prerequisite for creating a job is to first create the links as described above.

+

Once the linkIds for the From and To are given, the job configs for the associated connector of each link object have to be filled. You can get the list of all the from and to job config/inputs using Display Config and Input Names For Connector for that connector. A connector can have one or more links. We then use the links in the From and To direction to populate the corresponding MFromConfig and MToConfig respectively.

+

In addition to filling the job configs for the From and the To representing the link, we also need to fill the driver configs that control the job execution engine environment. For example, if the job execution engine happens to be MapReduce, we will specify the number of mappers to be used in reading data from the From data source.

+
+

Save Job

+

Here is the code to create and then save a job:

+
String url = "http://localhost:12000/sqoop/";
+SqoopClient client = new SqoopClient(url);
+//Creating dummy job object
+long fromLinkId = 1;// for jdbc connector
+long toLinkId = 2; // for HDFS connector
+MJob job = client.createJob(fromLinkId, toLinkId);
+job.setName("Vampire");
+job.setCreationUser("Buffy");
+// set the "FROM" link job config values
+MFromConfig fromJobConfig = job.getFromJobConfig();
+fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
+fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
+fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
+// set the "TO" link job config values
+MToConfig toJobConfig = job.getToJobConfig();
+toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/usr/tmp");
+// set the driver config values
+MDriverConfig driverConfig = job.getDriverConfig();
+driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
+
+Status status = client.saveJob(job);
+if(status.canProceed()) {
+ System.out.println("Created Job with Job Id: "+ job.getPersistenceId());
+} else {
+ System.out.println("Something went wrong creating the job");
+}
+
+
+

A user can retrieve a job using the following methods (see the sketch after the table):

Method | Description
getJob(jid) | Returns a job by id
getJobs() | Returns the list of jobs in sqoop
+
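For example, a short sketch of both methods, assuming a job with id 1 exists:

// Retrieve a single job by its id
MJob job = client.getJob(1);
// Retrieve all jobs known to the server
List<MJob> jobs = client.getJobs();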
+
+

List of status codes

Status | Description
OK | There are no issues, no warnings.
WARNING | Validated entity is correct enough to proceed; not a fatal error.
ERROR | There are serious issues with the validated entity. We can’t proceed until the reported issues are resolved.
+
+
+

View Error or Warning validation message

+

In case of any WARNING or ERROR status, the user has to iterate over the list of validation messages.

+
printMessage(link.getConnectorLinkConfig().getConfigs());

private static void printMessage(List<MConfig> configs) {
  for (MConfig config : configs) {
    List<MInput<?>> inputlist = config.getInputs();
    if (config.getValidationMessages() != null) {
      // print every config-level validation message
      for (Message message : config.getValidationMessages()) {
        System.out.println("Config validation message: " + message.getMessage());
      }
    }
    for (MInput minput : inputlist) {
      if (minput.getValidationStatus() == Status.WARNING) {
        for (Message message : minput.getValidationMessages()) {
          System.out.println("Config Input Validation Warning: " + message.getMessage());
        }
      } else if (minput.getValidationStatus() == Status.ERROR) {
        for (Message message : minput.getValidationMessages()) {
          System.out.println("Config Input Validation Error: " + message.getMessage());
        }
      }
    }
  }
}
+
+
+
+ +
+
+

Job Start

+

Starting a job requires a job id. On successful start, the getStatus() method returns “BOOTING” or “RUNNING”.

+
//Job start
+long jobId = 1;
+MSubmission submission = client.startJob(jobId);
+System.out.println("Job Submission Status : " + submission.getStatus());
+if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+  System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+}
+System.out.println("Hadoop job id :" + submission.getExternalId());
+System.out.println("Job link : " + submission.getExternalLink());
+Counters counters = submission.getCounters();
+if(counters != null) {
+  System.out.println("Counters:");
+  for(CounterGroup group : counters) {
+    System.out.print("\t");
+    System.out.println(group.getName());
+    for(Counter counter : group) {
+      System.out.print("\t\t");
+      System.out.print(counter.getName());
+      System.out.print(": ");
+      System.out.println(counter.getValue());
+    }
+  }
+}
+if(submission.getExceptionInfo() != null) {
+  System.out.println("Exception info : " +submission.getExceptionInfo());
+}
+
+
+//Check job status for a running job
+submission = client.getJobStatus(jobId);
+if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+  System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+}
+
+//Stop a running job
+client.stopJob(jobId);
+
+
+

In the above code block, the job start is asynchronous. For a synchronous job start, use the startJob(jid, callback, pollTime) method. If you are not interested in getting the job status, invoke the same method with “null” as the value of the callback parameter; it returns the final job status. pollTime is the request interval for getting the job status from the sqoop server and its value should be greater than zero; if a low value is given for pollTime, the sqoop server will be hit frequently. When a synchronous job is started with a non-null callback, the client first invokes the callback’s submitted(MSubmission) method on successful start, then invokes the updated(MSubmission) method on the callback after every poll time interval, and finally invokes the finished(MSubmission) method when the job execution finishes.
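A sketch of the synchronous variant described above, assuming the SubmissionCallback interface from the Sqoop client package and a poll time of 5 seconds (startJob with a callback throws InterruptedException):

SubmissionCallback callback = new SubmissionCallback() {
  @Override
  public void submitted(MSubmission submission) {
    System.out.println("Submitted: " + submission.getStatus());
  }

  @Override
  public void updated(MSubmission submission) {
    System.out.println("Update: " + submission.getStatus());
  }

  @Override
  public void finished(MSubmission submission) {
    System.out.println("Finished: " + submission.getStatus());
  }
};
// Blocks until the job finishes, polling every 5000 ms
MSubmission finalSubmission = client.startJob(jobId, callback, 5000);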

+
+
+

Display Config and Input Names For Connector

+

You can view the config/input names for the link and job config types per connector:

+
String url = "http://localhost:12000/sqoop/";
+SqoopClient client = new SqoopClient(url);
+long connectorId = 1;
+// link config for connector
+describe(client.getConnector(connectorId).getLinkConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+// from job config for connector
+describe(client.getConnector(connectorId).getFromConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+// to job config for the connector
+describe(client.getConnector(connectorId).getToConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+
+void describe(List<MConfig> configs, ResourceBundle resource) {
+  for (MConfig config : configs) {
+    System.out.println(resource.getString(config.getLabelKey())+":");
+    List<MInput<?>> inputs = config.getInputs();
+    for (MInput input : inputs) {
+      System.out.println(resource.getString(input.getLabelKey()) + " : " + input.getValue());
+    }
+    System.out.println();
+  }
+}
+
+
+

The above Sqoop 2 Client API tutorial explained how to create a link, create a job, and then start the job.

+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/CommandLineClient.html =================================================================== --- content/resources/docs/1.99.4/CommandLineClient.html (revision 0) +++ content/resources/docs/1.99.4/CommandLineClient.html (working copy) @@ -0,0 +1,856 @@ + + + + + + + + + + Command Line Shell — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Command Line Shell

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Command Line Shell

+

Sqoop 2 provides a command line shell that is capable of communicating with the Sqoop 2 server using the REST interface. The client can run in two modes: interactive and batch. The commands create, update and clone are not currently supported in batch mode. Interactive mode supports all available commands.

+

You can start the Sqoop 2 client in interactive mode using the command sqoop2-shell:

+
sqoop2-shell
+
+
+

Batch mode can be started by adding an additional argument representing the path to your Sqoop client script:

+
sqoop2-shell /path/to/your/script.sqoop
+
+
+

The Sqoop client script is expected to contain valid Sqoop client commands, empty lines, and comment lines starting with #. Comments and empty lines are ignored; all other lines are interpreted. Example script:

+
# Specify company server
+set server --host sqoop2.company.net
+
+# Executing given job
+start job  --jid 1
+
+
+ +
+

Resource file

+

The Sqoop 2 client can load resource files, similarly to other command line tools. At the beginning of execution, the Sqoop client checks for the existence of the file .sqoop2rc in the home directory of the currently logged-in user. If such a file exists, it is interpreted before any additional actions. This file is loaded in both interactive and batch mode and can be used to execute any batch-compatible commands.

+

Example resource file:

+
# Configure our Sqoop 2 server automatically
+set server --host sqoop2.company.net
+
+# Run in verbose mode by default
+set option --name verbose --value true
+
+
+
+
+

Commands

+

Sqoop 2 contains several commands that will be documented in this section. Each command has one or more functions that accept various arguments. Not all commands are supported in both interactive and batch mode.

+
+

Auxiliary Commands

+

Auxiliary commands improve the user experience and run purely on the client side; they do not need a working connection to the server.

+
• exit Exit the client immediately. This command can also be executed by sending the EOT (end of transmission) character - CTRL+D on most common Linux shells like Bash or Zsh.
• history Print out the command history. Please note that the Sqoop client saves history from previous executions, so you might see commands that you’ve executed in previous runs.
• help Show all available commands with short in-shell documentation.
+
sqoop:000> help
+For information about Sqoop, visit: http://sqoop.apache.org/
+
+Available commands:
+  exit    (\x  ) Exit the shell
+  history (\H  ) Display, manage and recall edit-line history
+  help    (\h  ) Display this help message
+  set     (\st ) Configure various client options and settings
+  show    (\sh ) Display various objects and configuration options
+  create  (\cr ) Create new object in Sqoop repository
+  delete  (\d  ) Delete existing object in Sqoop repository
+  update  (\up ) Update objects in Sqoop repository
+  clone   (\cl ) Create new object based on existing one
+  start   (\sta) Start job
+  stop    (\stp) Stop job
+  status  (\stu) Display status of a job
+  enable  (\en ) Enable object in Sqoop repository
+  disable (\di ) Disable object in Sqoop repository
+
+
+
+
+

Set Command

+

The set command allows you to set various properties of the client. Like the auxiliary commands, set does not require a connection to the Sqoop server. The set command is not used to reconfigure the Sqoop server.

+

Available functions:

Function | Description
server | Set connection configuration for server
option | Set various client side options
+
+

Set Server Function

+

Configure the connection to the Sqoop server - host, port and web application name. Available arguments:

Argument | Default value | Description
-h, --host | localhost | Server name (FQDN) where Sqoop server is running
-p, --port | 12000 | TCP Port
-w, --webapp | sqoop | Tomcat’s web application name
-u, --url |  | Sqoop Server in url format
+

Example:

+
set server --host sqoop2.company.net --port 80 --webapp sqoop
+
+
+

or

+
set server --url http://sqoop2.company.net:80/sqoop
+
+
+

Note: When the --url option is given, the --host, --port and --webapp options will be ignored.

+
+
+

Set Option Function

+

Configure Sqoop client related options. This function has two required arguments: name and value. Name represents the internal property name and value holds the new value that should be set. The list of available option names follows:

Option name | Default value | Description
verbose | false | Client will print additional information if verbose mode is enabled
poll-timeout | 10000 | Server poll timeout in milliseconds
+

Example:

+
set option --name verbose --value true
+set option --name poll-timeout --value 20000
+
+
+
+
+
+

Show Command

+

The show command displays various information, as described below.

+

Available functions:

Function | Description
server | Display connection information to the sqoop server (host, port, webapp)
option | Display various client side options
version | Show client build version; with the option --all it shows the server build version and supported api versions
connector | Show connector configurable and its related configs
driver | Show driver configurable and its related configs
link | Show links in sqoop
job | Show jobs in sqoop
+
+

Show Server Function

+

Show details about connection to Sqoop server.

Argument | Description
-a, --all | Show all connection related information (host, port, webapp)
-h, --host | Show host
-p, --port | Show port
-w, --webapp | Show web application name
+

Example:

+
show server --all
+
+
+
+
+

Show Option Function

+

Show values of various client side options. This function will show all client options when called without arguments.

Argument | Description
-n, --name | Show client option value with given name
+

Please check the table in the Set Option Function section for a list of all supported option names.

+

Example:

+
show option --name verbose
+
+
+
+
+

Show Version Function

+

Show build versions of both client and server as well as the supported rest api versions.

Argument | Description
-a, --all | Show all versions (server, client, api)
-c, --client | Show client build version
-s, --server | Show server build version
-p, --api | Show supported api versions
+

Example:

+
show version --all
+
+
+
+
+

Show Connector Function

+

Show persisted connector configurable and its related configs used in creating associated link and job objects

Argument | Description
-a, --all | Show information for all connectors
-c, --cid <x> | Show information for connector with id <x>
+

Example:

+
show connector --all or show connector
+
+
+
+
+

Show Driver Function

+

Show persisted driver configurable and its related configs used in creating job objects

+

This function does not have any extra arguments. There is only one registered driver in sqoop.

+

Example:

+
show driver
+
+
+
+ +
+

Show Job Function

+

Show persisted job objects.

Argument | Description
-a, --all | Show all available jobs
-j, --jid <x> | Show job with id <x>
+

Example:

+
show job --all or show job
+
+
+
+
+

Show Submission Function

+

Show persisted job submission objects.

Argument | Description
-j, --jid <x> | Show available submissions for given job
-d, --detail | Show job submissions in full details
+

Example:

+
show submission
+show submission --jid 1
+show submission --jid 1 --detail
+
+
+
+
+
+

Create Command

+

Creates new link and job objects. This command is supported only in interactive mode. It will ask the user to enter the link config when creating a link object, and the from/to and driver job configs when creating a job object.

+

Available functions:

Function | Description
link | Create new link object
job | Create new job object
+ +
+

Create Job Function

+

Create new job object.

Argument | Description
-f, --from <x> | Create new job object with a FROM link with id <x>
-t, --to <x> | Create new job object with a TO link with id <x>
+

Example:

+
create job --from 1 --to 2 or create job -f 1 -t 2
+
+
+
+
+
+

Update Command

+

The update command allows you to edit link and job objects. This command is supported only in interactive mode.

+ +
+

Update Job Function

+

Update existing job object.

Argument | Description
-j, --jid <x> | Update existing job object with id <x>
+

Example:

+
update job --jid 1
+
+
+
+
+
+

Delete Command

+

Deletes link and job objects from Sqoop server.

+ +
+

Delete Job Function

+

Delete existing job object.

Argument | Description
-j, --jid <x> | Delete job object with id <x>
+

Example:

+
delete job --jid 1
+
+
+
+
+
+

Clone Command

+

The clone command will load an existing link or job object from the Sqoop server and allow the user to make in-place updates that will result in the creation of a new link or job object. This command is not supported in batch mode.

+ +
+

Clone Job Function

+

Clone existing job object.

Argument | Description
-j, --jid <x> | Clone job object with id <x>
+

Example:

+
clone job --jid 1
+
+
+
+
+
+

Start Command

+

Start command will begin execution of an existing Sqoop job.

+
+

Start Job Function

+

Start a job (submit a new submission). Starting an already running job is considered an invalid operation.

Argument | Description
-j, --jid <x> | Start job with id <x>
-s, --synchronous | Synchronous job execution
+

Example:

+
start job --jid 1
+start job --jid 1 --synchronous
+
+
+
+
+
+

Stop Command

+

The stop command will interrupt a job execution.

+
+

Stop Job Function

+

Interrupt running job.

Argument | Description
-j, --jid <x> | Interrupt running job with id <x>
+

Example:

+
stop job --jid 1
+
+
+
+
+
+

Status Command

+

Status command will retrieve the last status of a job.

+
+

Status Job Function

+

Retrieve the last status for a given job.

Argument | Description
-j, --jid <x> | Retrieve status for job with id <x>
+

Example:

+
status job --jid 1
+
+
+
+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/ConnectorDevelopment.html =================================================================== --- content/resources/docs/1.99.4/ConnectorDevelopment.html (revision 0) +++ content/resources/docs/1.99.4/ConnectorDevelopment.html (working copy) @@ -0,0 +1,468 @@ + + + + + + + + + + Sqoop 2 Connector Development — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Sqoop 2 Connector Development

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Sqoop 2 Connector Development

+

This document describes how to implement a connector in Sqoop 2 using the code sample from one of the built-in connectors ( GenericJdbcConnector ) as a reference. Sqoop 2 jobs support extraction from and/or loading to different data sources. Sqoop 2 connectors encapsulate the job lifecycle operations for extracting and/or loading data from and/or to different data sources. Each connector will primarily focus on a particular data source and its custom implementation for optimally reading and/or writing data in a distributed environment.

+ +
+

What is a Sqoop Connector?

+

Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The connector implementation will provide logic to read from and/or write to a data source that it represents. For instance, the ( GenericJdbcConnector ) encapsulates the logic to read from and/or write to jdbc enabled relational data sources. The connector part that enables reading from a data source and transferring this data to the internal Sqoop format is called the FROM, and the part that enables writing data to a data source by transferring data from the Sqoop format is called the TO. In order to interact with these data sources, the connector will provide one or many config classes and input fields within them.

+

Broadly we support two main config types for connectors: link type, represented by the enum ConfigType.LINK, and job type, represented by the enum ConfigType.JOB. Link config represents the properties needed to physically connect to the data source. Job config represents the properties required to invoke reading from and/or writing to a particular dataset in the data source it connects to. If a connector supports both reading from and writing to, it will provide the FromJobConfig and ToJobConfig objects. Each of these config objects is custom to each connector and can have one or more inputs associated with each of the Link, FromJob and ToJob config types. Hence we call the connectors configurables, i.e. entities that can provide configs for interacting with the data source they represent. As the connectors evolve over time to support new features in their data sources, the configs and inputs will change as well. Thus the connector API also provides methods for upgrading the config and input names and data related to these data sources across different versions.

+

The connectors implement logic for various stages of the extract/load process using the connector API described below. While extracting/reading data from the data source, the main stages are Initializer, Partitioner, Extractor and Destroyer. While loading/writing data to the data source, the main stages currently supported are Initializer, Loader and Destroyer. Each stage has its unique set of responsibilities that are explained in detail below. Since connectors understand the internals of the data source they represent, they work in tandem with the sqoop supported execution engines such as MapReduce or Spark (in the future) to accomplish this process in the most optimal way.

+
+

When do we add a new connector?

+

You add a new connector when you need to extract/read data from a new data source, or load/write data into a new data source that is not supported yet in Sqoop 2. In addition to the connector API, Sqoop 2 also has a submission and execution engine interface. At the moment the only supported engine is MapReduce, but we may support additional engines in the future, such as Spark. Since many parallel execution engines are capable of reading/writing data, there may be a question of whether adding support for a new data source should be done through the connector or the execution engine API.

+

Our guidelines are as follows: Connectors should manage all data extraction (reading) from and/or loading (writing) into a data source. The submission and execution engine together manage the job submission and execution life cycle to read/write data from/to data sources in the most optimal way possible. If you need to support a new data store and the details of linking to it, and don’t care how the process of reading/writing from/to it happens, then you are looking to add a connector, and you should continue reading the Connector API details below to contribute new connectors to Sqoop 2.

+
+
+
+

Connector Implementation

+

The SqoopConnector class defines an API for the connectors that must be implemented by the connector developers. Each Connector must extend SqoopConnector and override the methods shown below.

+
public abstract String getVersion();
+public abstract ResourceBundle getBundle(Locale locale);
+public abstract Class getLinkConfigurationClass();
+public abstract Class getJobConfigurationClass(Direction direction);
+public abstract From getFrom();
+public abstract To getTo();
+public abstract ConnectorConfigurableUpgrader getConfigurableUpgrader();
+
+
+

Connectors can optionally override the following methods:

+
public List<Direction> getSupportedDirections();
+public Class<? extends IntermediateDataFormat<?>> getIntermediateDataFormat();
+
+
+

The getFrom method returns a From instance, which is a Transferable entity that encapsulates the operations needed to read from the data source that the connector represents.

+

The getTo method returns a To instance, which is a Transferable entity that encapsulates the operations needed to write to the data source that the connector represents.

+

Methods such as getBundle, getLinkConfigurationClass and getJobConfigurationClass are related to Configurations, described below.

+

Since a connector represents a data source and can support one of the two directions - either reading FROM its data source, or writing TO its data source - or both, the getSupportedDirections method returns a list of directions that the connector implements. This should be a subset of the values in the Direction enum we provide:

+
public List<Direction> getSupportedDirections() {
+    return Arrays.asList(new Direction[]{
+        Direction.FROM,
+        Direction.TO
+    });
+}
+
+
+
+

From

+

The getFrom method returns a From instance, which is a Transferable entity that encapsulates the operations needed to read from the data source the connector represents. The built-in GenericJdbcConnector defines From like this:

+
private static final From FROM = new From(
+      GenericJdbcFromInitializer.class,
+      GenericJdbcPartitioner.class,
+      GenericJdbcExtractor.class,
+      GenericJdbcFromDestroyer.class);
+...
+
+@Override
+public From getFrom() {
+  return FROM;
+}
+
+
+
+

Initializer and Destroyer

+

The Initializer is instantiated before the submission of the sqoop job to the execution engine and performs preparations such as connecting to the data source, creating temporary tables or adding dependent jar files. Initializers are executed as the first step in the sqoop job lifecycle. Here is the Initializer API:

+
public abstract void initialize(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+public List<String> getJars(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+public abstract Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+
+

In addition to the initialize() method where the job execution preparation activities occur, the Initializer must also implement the getSchema() method for the direction it supports. The getSchema() method is used by the sqoop system to match the data extracted/read from the From instance of the connector's data source with the data loaded/written to the To instance of the connector's data source. In the case of a relational or columnar database, the returned Schema object will include a collection of columns with their data types. If the data source is schema-less, such as a file, an empty Schema can be returned (i.e. a Schema object without any columns).
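For illustration only, here is a minimal getSchema() sketch for a source with two known columns. The schema and column names are hypothetical; Schema and the column types such as FixedPoint and Text come from the org.apache.sqoop.schema packages:

@Override
public Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
    FromJobConfiguration jobConfiguration) {
  // Describe the data source as a named collection of typed columns
  Schema schema = new Schema("example");
  schema.addColumn(new FixedPoint("id"));   // numeric column
  schema.addColumn(new Text("name"));       // text column
  return schema;
}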

+

NOTE: Sqoop 2 currently does not support extract and load between two connectors that represent schema-less data sources. We expect that at least the From instance or the To instance of the connector in the sqoop job will have a schema. If both From and To have an associated non-empty schema, Sqoop 2 will load data by column name, i.e. data in column “A” in the From instance of the connector for the job will be loaded to column “A” in the To instance of the connector for that job.

+

The Destroyer is instantiated after the execution engine finishes its processing. It is the last step in the sqoop job lifecycle, so pending clean-up tasks such as dropping temporary tables and closing connections happen here. The term destroyer is a little misleading: it also represents the phase where the final output commit to the data source can happen, in the case of the To instance of the connector.
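For reference, a sketch of the Destroyer API, whose single method mirrors the Initializer's initialize() signature:

public abstract void destroy(DestroyerContext context, LinkConfiguration linkConfiguration,
    JobConfiguration jobConfiguration);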

+
+
+

Partitioner

+

The Partitioner creates Partition instances ranging from 1..N, where N is driven by a configuration as well. The default number of partitions created is set to 10 in the sqoop code.

+

The Partitioner must implement the getPartitions method in the Partitioner API:

+
public abstract List<Partition> getPartitions(PartitionerContext context,
+    LinkConfiguration linkConfiguration, FromJobConfiguration jobConfiguration);
+
+
+

Partition instances are passed to the Extractor as the argument of its extract method. The Extractor determines which portion of the data to extract for a given partition.

+

There is no particular convention for Partition classes other than being Writable and toString()-able. Here is the Partition API:

+
public abstract class Partition {
+  public abstract void readFields(DataInput in) throws IOException;
+  public abstract void write(DataOutput out) throws IOException;
+  public abstract String toString();
+}
+
+
+

Connectors can implement custom Partition classes. GenericJdbcPartitioner is one such example; it returns GenericJdbcPartition objects. A sketch of a simple custom Partition is shown below.
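As an illustration, here is a sketch of a custom Partition holding a simple numeric range. The field names are hypothetical; real partitioners such as GenericJdbcPartitioner store whatever condition their extractor needs, serialized via java.io.DataInput/DataOutput:

public class RangePartition extends Partition {
  private long start;
  private long end;

  @Override
  public void readFields(DataInput in) throws IOException {
    // Deserialize the range boundaries in the same order they were written
    start = in.readLong();
    end = in.readLong();
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(start);
    out.writeLong(end);
  }

  @Override
  public String toString() {
    return "[" + start + ", " + end + ")";
  }
}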

+
+
+

Extractor

+

The Extractor (E for ETL) extracts data from a given data source. The Extractor must implement the extract method in the Extractor API:

+
public abstract void extract(ExtractorContext context,
+                             LinkConfiguration linkConfiguration,
+                             JobConfiguration jobConfiguration,
+                             SqoopPartition partition);
+
+
+

The extract method extracts data from the data source using the link and job configuration properties and writes it to the DataWriter (provided by the extractor context) as the default Intermediate representation .

+

Extractors use the DataWriter provided by the ExtractorContext to send a record through the sqoop system:

+
context.getDataWriter().writeArrayRecord(array);
+
+
+

The extractor must iterate through the given partition in the extract method.

+
while (resultSet.next()) {
+  ...
+  context.getDataWriter().writeArrayRecord(array);
+  ...
+}
+
+
+
+
+
+

To

+

The getTo method returns a To instance, which is a Transferable entity that encapsulates the operations needed to write data to the data source the connector represents. The built-in GenericJdbcConnector defines To like this:

+
private static final To TO = new To(
+      GenericJdbcToInitializer.class,
+      GenericJdbcLoader.class,
+      GenericJdbcToDestroyer.class);
+...
+
+@Override
+public To getTo() {
+  return TO;
+}
+
+
+
+

Initializer and Destroyer

+

Initializer and Destroyer of a To instance are used in a similar way to those of a From instance. Refer to the previous section for more details.

+
+
+

Loader

+

A loader (L for ETL) receives data from the From instance of the sqoop connector associated with the sqoop job and then loads it to the To instance of the connector associated with the same sqoop job.

+

The Loader must implement the load method of the Loader API:

+
public abstract void load(LoaderContext context,
+                          ConnectionConfiguration connectionConfiguration,
+                          JobConfiguration jobConfiguration) throws Exception;
+
+
+

The load method reads data from the DataReader (provided by the context) in the default intermediate representation and loads it to the data source.

+

Loader must iterate in the load method until the data from DataReader is exhausted.

+
while ((array = context.getDataReader().readArrayRecord()) != null) {
+  ...
+}
+
+
+

NOTE: we do not yet support a stage for connector developers to control how to balance the loading/writing of data across the multiple loaders. In the future we may add this to the connector API to allow custom logic to balance the loading across multiple reducers.

+
+
+
+
+

Configurables

+
+

Configurable registration

+

Connectors are one of the currently supported configurables in Sqoop. Sqoop 2 registers definitions of connectors from the file named sqoopconnector.properties, which each connector implementation should provide to become available in Sqoop.

+
# Generic JDBC Connector Properties
+org.apache.sqoop.connector.class = org.apache.sqoop.connector.jdbc.GenericJdbcConnector
+org.apache.sqoop.connector.name = generic-jdbc-connector
+
+
+
+
+

Configurations

+

Implementations of SqoopConnector override methods such as getLinkConfigurationClass and getJobConfigurationClass, returning the configuration classes:

+
@Override
+public Class getLinkConfigurationClass() {
+  return LinkConfiguration.class;
+}
+
+@Override
+public Class getJobConfigurationClass(Direction direction) {
+  switch (direction) {
+    case FROM:
+      return FromJobConfiguration.class;
+    case TO:
+      return ToJobConfiguration.class;
+    default:
+      return null;
+  }
+}
+
+
+

Configurations are represented by annotations defined in the org.apache.sqoop.model package. Annotations such as ConfigurationClass, ConfigClass, Config and Input are provided for defining configuration objects for each connector.

+

@ConfigurationClass is a marker annotation for ConfigurationClasses that hold a group or list of ConfigClasses annotated with the marker @ConfigClass:

+
@ConfigurationClass
+public class LinkConfiguration {
+
+  @Config public LinkConfig linkConfig;
+
+  public LinkConfiguration() {
+    linkConfig = new LinkConfig();
+  }
+}
+
+
+

Each ConfigClass defines the different inputs it exposes for the link and job configs. These inputs are annotated with @Input, and the user will be asked to fill them in when creating a sqoop job that uses this instance of the connector for either the From or To part of the job.

+
@ConfigClass(validators = {@Validator(LinkConfig.ConfigValidator.class)})
+public class LinkConfig {
+  @Input(size = 128, validators = {@Validator(NotEmpty.class), @Validator(ClassAvailable.class)} )
+  public String jdbcDriver;
+  @Input(size = 128) public String connectionString;
+  @Input(size = 40)  public String username;
+  @Input(size = 40, sensitive = true) public String password;
+  @Input public Map<String, String> jdbcProperties;
+}
+
+
+

Each ConfigClass and the inputs within the configs annotated with Input can specify validators via the @Validator annotation described below.

+
+

Empty Configuration

+

If a connector does not have any configuration inputs to specify for the ConfigType.LINK or ConfigType.JOB, it is recommended to return the EmptyConfiguration class from the getLinkConfigurationClass() or getJobConfigurationClass(..) methods.

+
@ConfigurationClass
+public class EmptyConfiguration { }
+
+
+
+
+
+

Configuration ResourceBundle

+

The config and its corresponding input names, along with the input field descriptions, are represented in the config resource bundle defined per connector.

+
# jdbc driver
+connection.jdbcDriver.label = JDBC Driver Class
+connection.jdbcDriver.help = Enter the fully qualified class name of the JDBC \
+                   driver that will be used for establishing this connection.
+
+# connect string
+connection.connectionString.label = JDBC Connection String
+connection.connectionString.help = Enter the value of JDBC connection string to be \
+                   used by this connector for creating connections.
+
+...
+
+
+

Those resources are loaded by the getBundle method of the SqoopConnector:

+
@Override
+public ResourceBundle getBundle(Locale locale) {
+  return ResourceBundle.getBundle(
+  GenericJdbcConnectorConstants.RESOURCE_BUNDLE_NAME, locale);
+}
+
+
+
+
+

Validations for Configs and Inputs

+

Validators validate the config objects and the inputs associated with the config objects. For the config objects themselves we encourage developers to write custom validators for both the link and job config types.

+
@Input(size = 128, validators = {@Validator(value = StartsWith.class, strArg = "jdbc:")} )
+
+@Input(size = 255, validators = { @Validator(NotEmpty.class) })
+
+
+

Sqoop 2 provides a list of standard input validators that can be used by different connectors for the link and job type configuration inputs.

+
public class NotEmpty extends AbstractValidator<String> {
  @Override
  public void validate(String instance) {
    if (instance == null || instance.isEmpty()) {
      addMessage(Status.ERROR, "Can't be null nor empty");
    }
  }
}
+
+
+

The validation logic is executed when users enter input values for the link and job configs while creating sqoop jobs that use the From and To instances of the connectors associated with the job.

+
+
+
+

Sqoop 2 MapReduce Job Execution Lifecycle with Connector API

+

Sqoop 2 provides MapReduce utilities such as SqoopMapper and SqoopReducer that aid sqoop job execution.

+

Note: Any class prefixed with Sqoop is an internal sqoop class provided for MapReduce and is not part of the connector API. These internal classes work with the custom implementations of Extractor and Partitioner in the From instance and Loader in the To instance of the connector.

+

When reading from a data source, the Extractor provided by the From instance of the connector extracts data from the data source it represents, and the Loader, provided by the To instance of the connector, loads data into the data source it represents.

+

The diagram below describes the initialization phase of a job. SqoopInputFormat creates splits using the Partitioner.

+
    ,----------------.          ,-----------.
+    |SqoopInputFormat|          |Partitioner|
+    `-------+--------'          `-----+-----'
+ getSplits  |                         |
+----------->|                         |
+            |      getPartitions      |
+            |------------------------>|
+            |                         |         ,---------.
+            |                         |-------> |Partition|
+            |                         |         `----+----'
+            |<- - - - - - - - - - - - |              |
+            |                         |              |          ,----------.
+            |-------------------------------------------------->|SqoopSplit|
+            |                         |              |          `----+-----'
+
+
+

The diagram below describes the map phase of a job. SqoopMapper invokes the From connector’s Extractor’s extract method.

+
    ,-----------.
+    |SqoopMapper|
+    `-----+-----'
+   run    |
+--------->|                                   ,------------------.
+          |---------------------------------->|SqoopMapDataWriter|
+          |                                   `------+-----------'
+          |                ,---------.               |
+          |--------------> |Extractor|               |
+          |                `----+----'               |
+          |      extract        |                    |
+          |-------------------->|                    |
+          |                     |                    |
+         read from DB           |                    |
+<-------------------------------|      write*        |
+          |                     |------------------->|
+          |                     |                    |           ,----.
+          |                     |                    |---------->|Data|
+          |                     |                    |           `-+--'
+          |                     |                    |
+          |                     |                    |      context.write
+          |                     |                    |-------------------------->
+
+
+

The diagram below describes the reduce phase of a job. OutputFormat invokes the To connector’s Loader’s load method (via SqoopOutputFormatLoadExecutor).

+
  ,------------.  ,---------------------.
+  |SqoopReducer|  |SqoopNullOutputFormat|
+  `---+--------'  `----------+----------'
+      |                 |   ,-----------------------------.
+      |                 |-> |SqoopOutputFormatLoadExecutor|
+      |                 |   `--------------+--------------'        ,----.
+      |                 |                  |---------------------> |Data|
+      |                 |                  |                       `-+--'
+      |                 |                  |   ,-----------------.   |
+      |                 |                  |-> |SqoopRecordWriter|   |
+    getRecordWriter     |                  |   `--------+--------'   |
+----------------------->| getRecordWriter  |            |            |
+      |                 |----------------->|            |            |     ,--------------.
+      |                 |                  |-----------------------------> |ConsumerThread|
+      |                 |                  |            |            |     `------+-------'
+      |                 |<- - - - - - - - -|            |            |            |    ,------.
+<- - - - - - - - - - - -|                  |            |            |            |--->|Loader|
+      |                 |                  |            |            |            |    `--+---'
+      |                 |                  |            |            |            |       |
+      |                 |                  |            |            |            | load  |
+ run  |                 |                  |            |            |            |------>|
+----->|                 |     write        |            |            |            |       |
+      |------------------------------------------------>| setContent |            | read* |
+      |                 |                  |            |----------->| getContent |<------|
+      |                 |                  |            |            |<-----------|       |
+      |                 |                  |            |            |            | - - ->|
+      |                 |                  |            |            |            |       | write into DB
+      |                 |                  |            |            |            |       |-------------->
+
+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/DevEnv.html =================================================================== --- content/resources/docs/1.99.4/DevEnv.html (revision 0) +++ content/resources/docs/1.99.4/DevEnv.html (working copy) @@ -0,0 +1,94 @@ + + + + + + + + + + Sqoop 2 Development Environment Setup — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Sqoop 2 Development Environment Setup

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Sqoop 2 Development Environment Setup

+

This document describes how to set up a development environment for Sqoop 2.

+
+

System Requirement

+
+

Java

+

Sqoop is written in Java and uses version 1.6. Download and install Java, then point JAVA_HOME to the installation directory, e.g. export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_32.

+
+
+

Maven

+

Sqoop uses Maven 3 for building the project. Download Maven and follow the installation instructions given in the link.

+
+
+
+

Eclipse Setup

+

Steps for downloading the source code are given in Building Sqoop2.

+

The Sqoop 2 project has multiple modules where one module depends on another, e.g. the sqoop 2 client module depends on the sqoop 2 common module. Follow the steps below for creating eclipse projects and classpaths for each module.

+
//Install all package into local maven repository
+mvn clean install -DskipTests
+
+//Adding M2_REPO variable to eclipse workspace
+mvn eclipse:configure-workspace -Declipse.workspace=<path-to-eclipse-workspace-dir-for-sqoop-2>
+
+//Eclipse project creation with optional parameters
+mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
+
+
+

Alternatively, you can manually add the M2_REPO classpath variable as the maven repository path in eclipse: Window -> Java -> Classpath Variables -> click “New” -> in the new dialog box, enter Name as M2_REPO and Path as $HOME/.m2/repository -> click Ok.

+

After successful execution of the above maven commands, import the sqoop project modules into eclipse: File -> Import -> General -> Existing Projects into Workspace -> click Next -> browse to the Sqoop 2 directory ($HOME/git/sqoop2) -> click Ok -> the import dialog shows multiple projects (sqoop-client, sqoop-common, etc.) -> select all modules -> click Finish.

+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/Installation.html =================================================================== --- content/resources/docs/1.99.4/Installation.html (revision 0) +++ content/resources/docs/1.99.4/Installation.html (working copy) @@ -0,0 +1,134 @@ + + + + + + + + + + Installation — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Installation

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Installation

+

Sqoop ships as one binary package, however it is composed of two separate parts: client and server. You need to install the server on a single node in your cluster. This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a mapreduce client, and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any number of machines. The client does not act as a mapreduce client, and thus you do not need to install Hadoop on nodes that will act only as Sqoop clients.

+
+

Server installation

+

Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You don’t need to run any Hadoop related services there; however, the machine must be able to act as a Hadoop client. You should be able to list the contents of HDFS, for example:

+
hadoop dfs -ls
+
+
+

The Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you’re using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder.

+
# Decompress Sqoop distribution tarball
+tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
+
+# Move decompressed content to any location
+mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
+
+# Change working directory
+cd /usr/lib/sqoop
+
+
+
+

Installing Dependencies

+

Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - NameNode and either JobTracker or ResourceManager, depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.

+

The path to the Hadoop libraries is stored in the file catalina.properties inside the directory server/conf. You need to change the property called common.loader to contain all directories with your Hadoop libraries. The default expected locations are /usr/lib/hadoop and /usr/lib/hadoop/lib/. Please check the comments in the file for a further description of how to configure different locations.

+

Lastly, you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the lib/ directory. You can create this directory if it does not exist already.

+
+
+

Configuring PATH

+

All user and administrator facing shell commands are stored in the bin/ directory. It’s recommended to add this directory to your $PATH for easier execution, for example:

+
PATH=$PATH:`pwd`/bin/
+
+
+

Further documentation pages will assume that you have the binaries on your $PATH. You will need to call them with the full path if you decide to skip this step.

+
+
+

Configuring Server

+

Before starting the server you should revise the configuration to match your specific environment. Server configuration files are stored in the server/conf directory of the distribution artifact, alongside the other Tomcat configuration files.

+

The file sqoop_bootstrap.properties specifies which configuration provider should be used for loading configuration for the rest of the Sqoop server. The default value PropertiesConfigurationProvider should be sufficient.

+

The second configuration file, sqoop.properties, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check whether all configuration properties fit your environment. The defaults, or very little tweaking, should be sufficient for most common cases.

+

You can verify the Sqoop server configuration using Verify Tool, for example:

+
sqoop2-tool verify
+
+
+

Upon running the verify tool, you should see messages similar to the following:

+
Verification was successful.
+Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+
+

Consult the Verify Tool documentation page in case of any failure.

+
+
+

Server Life Cycle

+

After installation and configuration you can start the Sqoop server with the following command:

+
sqoop2-server start
+
+
+

Similarly, you can stop the server using the following command:

+
sqoop2-server stop
+
+
+

By default the Sqoop server daemons use ports 12000 and 12001. You can set SQOOP_HTTP_PORT and SQOOP_ADMIN_PORT in the configuration file server/bin/setenv.sh to use different ports.

+
+
+
+

Client installation

+

The client does not need extra installation and configuration steps. Just copy the Sqoop distribution artifact to the target machine and unzip it in the desired location. You can start the client with the following command:

+
sqoop2-shell
+
+
+

You can find more documentation on the Sqoop client in the Command Line Client section.

+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/RESTAPI.html =================================================================== --- content/resources/docs/1.99.4/RESTAPI.html (revision 0) +++ content/resources/docs/1.99.4/RESTAPI.html (working copy) @@ -0,0 +1,1657 @@ + + + + + + + + + + Sqoop REST API Guide — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Sqoop REST API Guide

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Sqoop REST API Guide

+

This document will explain how you can use the Sqoop REST API to build applications that interact with the Sqoop server. The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language using HTTP and JSON.

+
+

Table of Contents

+ +
+
+

Initialization

+

Before continuing further, make sure that the Sqoop server is running.

+

Then find out the details of the Sqoop server: host, port and webapp, and keep them in mind. Note that the sqoop server is running on Apache Tomcat. To exercise a REST API for Sqoop, you could assemble and send an HTTP request to a url corresponding to that API. Generally, the url contains the host on which the sqoop server is running, the port on which the sqoop server is listening, and the webapp, i.e. the context path at which the Sqoop server is registered in the Apache Tomcat engine.

+

Certain requests might need to contain some additional query parameters and post data. These parameters could be given via +the HTTP headers, request body or both. All the content in the HTTP body is in JSON format.
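As a concrete illustration, here is a plain-Java sketch of issuing a GET request against the server, assuming the default host, port and webapp, and the version endpoint described later in this guide:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SqoopRestGet {
  public static void main(String[] args) throws Exception {
    // host (localhost), port (12000) and webapp (sqoop) assembled into the url
    URL url = new URL("http://localhost:12000/sqoop/version");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);  // JSON response body
      }
    }
  }
}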

+
+ +
+

Objects

+

This section covers all the objects that might exist in an API request and/or API response.

+
+

Configs and Inputs

+

Before creating any link for a connector, or a job with associated From and To links, the first thing to do is to get familiar with all the configurations that the connector exposes.

+

Each config consists of the following information

Field | Description
id | The id of this config
inputs | An array of inputs of this config
name | The unique name of this config per connector
type | The type of this config (LINK/JOB)
+

A typical config object is shown below:

+
 {
+  id:7,
+  inputs:[
+    {
+       id: 25,
+       name: "throttlingConfig.numExtractors",
+       type: "INTEGER",
+       sensitive: false
+    },
+    {
+       id: 26,
+       name: "throttlingConfig.numLoaders",
+       type: "INTEGER",
+       sensitive: false
+     }
+  ],
+  name: "throttlingConfig",
+  type: "JOB"
+}
+
+
+

Each input object in a config is structured as follows:

+ ++++ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
idThe id of this input
nameThe unique name of this input per config
typeThe data type of this input field
sizeThe length of this input field
sensitiveWhether this input contains sensitive information
+

To send a filled config in the request, you should always use the config id and the input id to map the values to their corresponding names. For example, the following request contains an input value com.mysql.jdbc.Driver with input id 7, inside a config with id 4 that belongs to a link with id 3.

+
link: {
+      id: 3,
+      enabled: true,
+      link-config-values: [{
+          id: 4,
+          inputs: [{
+              id: 7,
+              name: "linkConfig.jdbcDriver",
+              value: "com.mysql.jdbc.Driver",
+              type: "STRING",
+              size: 128,
+              sensitive: false
+          }, {
+              id: 8,
+              name: "linkConfig.connectionString",
+              value: "jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
+              type: "STRING",
+              size: 128,
+              sensitive: false
+          },
+          ...
+       }
+     }
+
+
+
+
+

Exception Response

+

Each operation on the Sqoop server might return an exception in the HTTP response; remember to take this into account. The exception code and message can be found in both the header and the body of the response.

+

Please see the “Header Parameters” section to find out how to get the exception information from the header.

+

In the body, the exception is expressed in JSON format. An example of an exception is:

+
{
+  "message":"DERBYREPO_0030:Unable to load specific job metadata from repository - Couldn't find job with id 2",
+  "stack-trace":[
+    {
+      "file":"DerbyRepositoryHandler.java",
+      "line":1111,
+      "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
+      "method":"findJob"
+    },
+    {
+      "file":"JdbcRepository.java",
+      "line":451,
+      "class":"org.apache.sqoop.repository.JdbcRepository$16",
+      "method":"doIt"
+    },
+    {
+      "file":"JdbcRepository.java",
+      "line":90,
+      "class":"org.apache.sqoop.repository.JdbcRepository",
+      "method":"doWithConnection"
+    },
+    {
+      "file":"JdbcRepository.java",
+      "line":61,
+      "class":"org.apache.sqoop.repository.JdbcRepository",
+      "method":"doWithConnection"
+    },
+    {
+      "file":"JdbcRepository.java",
+      "line":448,
+      "class":"org.apache.sqoop.repository.JdbcRepository",
+      "method":"findJob"
+    },
+    {
+      "file":"JobRequestHandler.java",
+      "line":238,
+      "class":"org.apache.sqoop.handler.JobRequestHandler",
+      "method":"getJobs"
+    }
+  ],
+  "class":"org.apache.sqoop.common.SqoopException"
+}
+
+
+
+
+

Config and Input Validation Status Response

+

The configs and the inputs associated with the connectors also provide custom validation rules for the values given to these input fields. Sqoop applies these custom validators and their corresponding validation logic when config values for a LINK or a JOB are posted.

+

An example of an OK status with the persisted ID:

+
{
+   "id": 3,
+   "validation-result": [
+       {}
+   ]
+}
+
+
+

An example of an ERROR status:

+
{
+  "validation-result": [
+    {
+     "linkConfig": [
+       {
+         "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+         "status": "ERROR"
+       }
+     ]
+   }
+  ]
+}
+
+
+
+
+

Job Submission Status Response

+

After starting a job, you can look up its running status. There are 7 possible statuses:

+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StatusDescription
BOOTINGIn the middle of submitting the job
FAILURE_ON_SUBMITUnable to submit this job to remote cluster
RUNNINGThe job is running now
SUCCEEDEDJob finished successfully
FAILEDJob failed
NEVER_EXECUTEDThe job has never been executed since it was created
UNKNOWNThe status is unknown
+
+
+
+

Header Parameters

+

For all Sqoop requests, the following header parameters are supported:

+ +++++ + + + + + + + + + + + + +
ParameterRequiredDescription
sqoop-user-nametrueThe name of the user who makes the request
+

For all the responses, the following parameters in the HTTP message header are available:

+ +++++ + + + + + + + + + + + + + + + + +
ParameterRequiredDescription
sqoop-error-codefalseThe error code when an error occurs on the server side for this request
sqoop-error-messagefalseThe explanation for an error code
+

So far, there are only these 2 parameters in the header of the response message. They exist only when something goes wrong on the server, and they always come along with an exception message in the response body.

+
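As a sketch, the following curl invocation sends the request header and prints the response headers, where sqoop-error-code and sqoop-error-message would appear on failure; the host, port and user name are illustrative:

curl -i -H "sqoop-user-name: root" http://localhost:12000/sqoop/version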
+
+

REST APIs

+

This section describes all the REST APIs that are supported by the Sqoop server.

+
+

/version - [GET] - Get Sqoop Version

+

Get all the version metadata of the Sqoop software on the server side.

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Fields of Response:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
source-revisionThe revision number of the Sqoop source code
api-versionsThe version of the network protocol
build-dateThe Sqoop release date
userThe user who made the release
source-urlThe URL of the source code trunk
build-versionThe version of Sqoop on the server side
+
    +
  • Response Example:
  • +
+
{
+ source-url: "git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
+ source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
+ build-version: "2.0.0-SNAPSHOT",
+ api-versions: [
+    "v1"
+  ],
+ user: "vbasavaraj",
+ build-date: "Mon Nov 3 08:18:21 PST 2014"
+}
+
+
+
+
+

/v1/connectors - [GET] Get all Connectors

+

Get all the connectors registered in Sqoop

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Example
  • +
+
{
+  connectors: [{
+      id: 1,
+      link-config: [],
+      job-config: {},
+      name: "hdfs-connector",
+      class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+      all-config-resources: {},
+      version: "2.0.0-SNAPSHOT"
+  }, {
+      id: 2,
+      link-config: [],
+      job-config: {},
+      name: "generic-jdbc-connector",
+      class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
+      all-config-resources: {},
+      version: "2.0.0-SNAPSHOT"
+  }]
+}
+
+
+
+
+

/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector

+

Provide the id or unique name of the connector in the url [cid] or [cname] part.

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Fields of Response:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
idThe id for the connector ( registered as a configurable )
job-configConnector job config and inputs for both FROM and TO
link-configConnector link config and inputs
all-config-resourcesAll config input labels and descriptions for the given connector
versionThe build version required for config and input data upgrades
+
    +
  • Response Example:
  • +
+
{
+ connector: {
+     id: 1,
+     job-config: {
+         TO: [{
+             id: 3,
+             inputs: [{
+                 id: 3,
+                 values: "TEXT_FILE,SEQUENCE_FILE",
+                 name: "toJobConfig.outputFormat",
+                 type: "ENUM",
+                 sensitive: false
+             }, {
+                 id: 4,
+                 values: "NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM",
+                 name: "toJobConfig.compression",
+                 type: "ENUM",
+                 sensitive: false
+             }, {
+                 id: 5,
+                 name: "toJobConfig.customCompression",
+                 type: "STRING",
+                 size: 255,
+                 sensitive: false
+             }, {
+                 id: 6,
+                 name: "toJobConfig.outputDirectory",
+                 type: "STRING",
+                 size: 255,
+                 sensitive: false
+             }],
+             name: "toJobConfig",
+             type: "JOB"
+         }],
+         FROM: [{
+             id: 2,
+             inputs: [{
+                 id: 2,
+                 name: "fromJobConfig.inputDirectory",
+                 type: "STRING",
+                 size: 255,
+                 sensitive: false
+             }],
+             name: "fromJobConfig",
+             type: "JOB"
+         }]
+     },
+     link-config: [{
+         id: 1,
+         inputs: [{
+             id: 1,
+             name: "linkConfig.uri",
+             type: "STRING",
+             size: 255,
+             sensitive: false
+         }],
+         name: "linkConfig",
+         type: "LINK"
+     }],
+     name: "hdfs-connector",
+     class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+     all-config-resources: {
+         fromJobConfig.label: "From Job configuration",
+             toJobConfig.ignored.label: "Ignored",
+             fromJobConfig.help: "Specifies information required to get data from Hadoop ecosystem",
+             toJobConfig.ignored.help: "This value is ignored",
+             toJobConfig.label: "ToJob configuration",
+             toJobConfig.storageType.label: "Storage type",
+             fromJobConfig.inputDirectory.label: "Input directory",
+             toJobConfig.outputFormat.label: "Output format",
+             toJobConfig.outputDirectory.label: "Output directory",
+             toJobConfig.outputDirectory.help: "Output directory for final data",
+             toJobConfig.compression.help: "Compression that should be used for the data",
+             toJobConfig.outputFormat.help: "Format in which data should be serialized",
+             toJobConfig.customCompression.label: "Custom compression format",
+             toJobConfig.compression.label: "Compression format",
+             linkConfig.label: "Link configuration",
+             toJobConfig.customCompression.help: "Full class name of the custom compression",
+             toJobConfig.storageType.help: "Target on Hadoop ecosystem where to store data",
+             linkConfig.help: "Here you supply information necessary to connect to HDFS",
+             linkConfig.uri.help: "HDFS URI used to connect to HDFS",
+             linkConfig.uri.label: "HDFS URI",
+             fromJobConfig.inputDirectory.help: "Directory that should be exported",
+             toJobConfig.help: "You must supply the information requested in order to get information where you want to store your data."
+     },
+     version: "2.0.0-SNAPSHOT"
+  }
+}
+
+
+
+
+

/v1/driver - [GET]- Get Sqoop Driver

+

Driver exposes configurations required for the job execution.

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Fields of Response:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + +
FieldDescription
idThe id for the driver ( registered as a configurable )
job-configDriver job config and inputs
versionThe build version of the driver
all-config-resourcesDriver-exposed config and input labels and descriptions
+
    +
  • Response Example:
  • +
+
{
+   id: 3,
+   job-config: [{
+       id: 7,
+       inputs: [{
+           id: 25,
+           name: "throttlingConfig.numExtractors",
+           type: "INTEGER",
+           sensitive: false
+       }, {
+           id: 26,
+           name: "throttlingConfig.numLoaders",
+           type: "INTEGER",
+           sensitive: false
+       }],
+       name: "throttlingConfig",
+       type: "JOB"
+   }],
+   all-config-resources: {
+       throttlingConfig.numExtractors.label: "Extractors",
+           throttlingConfig.numLoaders.help: "Number of loaders that Sqoop will use",
+           throttlingConfig.numLoaders.label: "Loaders",
+           throttlingConfig.label: "Throttling resources",
+           throttlingConfig.numExtractors.help: "Number of extractors that Sqoop will use",
+           throttlingConfig.help: "Set throttling boundaries to not overload your systems"
+   },
+   version: "1"
+}
+
+
+
+ + + +
+

/v1/link - [POST] - Create Link

+

Create a new link object. Provide values for at least the link config inputs that are required.

+
    +
  • Method: POST
  • +
  • Format: JSON
  • +
  • Fields of Request:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
linkThe root of the post data in JSON
idThe id of the link can be left blank in the post data
enabledWhether to enable this link (true/false)
update-dateThe last updated time of this link
creation-dateThe creation time of this link
update-userThe user who updated this link
creation-userThe user who created this link
nameThe name of this link
link-config-valuesConfig input values for link config for the corresponding connector
connector-idThe id of the connector used for this link
+
    +
  • Request Example:
  • +
+
{
+  link: {
+      id: -1,
+      enabled: true,
+      link-config-values: [{
+          id: 1,
+          inputs: [{
+              id: 1,
+              name: "linkConfig.uri",
+              value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+              type: "STRING",
+              size: 255,
+              sensitive: false
+          }],
+          name: "testInput",
+          type: "LINK"
+      }],
+      update-user: "root",
+      name: "testLink",
+      creation-date: 1415202223048,
+      connector-id: 1,
+      update-date: 1415202223048,
+      creation-user: "root"
+  }
+}
+
+
+
    +
  • Fields of Response:
  • +
+ ++++ + + + + + + + + + + + + + +
FieldDescription
idThe id assigned to this newly created link
validation-resultThe validation status for the link config inputs given in the post data
+
    +
  • ERROR Response Example:
  • +
+
{
+  "validation-result": [
+      {
+          "linkConfig": [
+              {
+                  "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                  "status": "ERROR"
+              }
+          ]
+      }
+  ]
+}
+
+
+
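As a sketch, the request body shown above could be saved to a file (here hypothetically named link.json) and posted with curl; the host, port and user name are illustrative:

curl -X POST \
     -H "Content-Type: application/json" \
     -H "sqoop-user-name: root" \
     -d @link.json \
     http://localhost:12000/sqoop/v1/link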
+ + + + +
+

/v1/jobs/ - [GET] Get all jobs

+

Get all the jobs created in Sqoop

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Example:
  • +
+
{
+   jobs: [{
+      driver-config-values: [],
+          enabled: true,
+          from-connector-id: 1,
+          update-user: "root",
+          to-config-values: [],
+          to-connector-id: 2,
+          creation-date: 1415310157618,
+          update-date: 1415310157618,
+          creation-user: "root",
+          id: 1,
+          to-link-id: 2,
+          from-config-values: [],
+          name: "First Job",
+          from-link-id: 1
+     },{
+      driver-config-values: [],
+          enabled: true,
+          from-connector-id: 2,
+          update-user: "root",
+          to-config-values: [],
+          to-connector-id: 1,
+          creation-date: 1415310650600,
+          update-date: 1415310650600,
+          creation-user: "root",
+          id: 2,
+          to-link-id: 1,
+          from-config-values: [],
+          name: "Second Job",
+          from-link-id: 2
+     }]
+}
+
+
+
+
+

/v1/jobs?cname=[cname] - [GET] Get all jobs by connector

+

Get all the jobs for a given connector, identified by the [cname] part. A sketch of such a request is shown below.

+
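For example, with curl, using the generic-jdbc-connector name from the connector list above; the host and port are illustrative:

curl -H "sqoop-user-name: root" "http://localhost:12000/sqoop/v1/jobs?cname=generic-jdbc-connector"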
+
+

/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job

+

Provide the name or the id of the job in the url [jname] part or [jid] part.

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Example:
  • +
+
 {
+   job: {
+       driver-config-values: [{
+               id: 7,
+               inputs: [{
+                   id: 25,
+                   name: "throttlingConfig.numExtractors",
+                   value: "3",
+                   type: "INTEGER",
+                   sensitive: false
+               }, {
+                   id: 26,
+                   name: "throttlingConfig.numLoaders",
+                   value: "3",
+                   type: "INTEGER",
+                   sensitive: false
+               }],
+               name: "throttlingConfig",
+               type: "JOB"
+           }],
+           enabled: true,
+           from-connector-id: 1,
+           update-user: "root",
+           to-config-values: [{
+               id: 6,
+               inputs: [{
+                   id: 19,
+                   name: "toJobConfig.schemaName",
+                   type: "STRING",
+                   size: 50,
+                   sensitive: false
+               }, {
+                   id: 20,
+                   name: "toJobConfig.tableName",
+                   value: "text",
+                   type: "STRING",
+                   size: 2000,
+                   sensitive: false
+               }, {
+                   id: 21,
+                   name: "toJobConfig.sql",
+                   type: "STRING",
+                   size: 50,
+                   sensitive: false
+               }, {
+                   id: 22,
+                   name: "toJobConfig.columns",
+                   type: "STRING",
+                   size: 50,
+                   sensitive: false
+               }, {
+                   id: 23,
+                   name: "toJobConfig.stageTableName",
+                   type: "STRING",
+                   size: 2000,
+                   sensitive: false
+               }, {
+                   id: 24,
+                   name: "toJobConfig.shouldClearStageTable",
+                   type: "BOOLEAN",
+                   sensitive: false
+               }],
+               name: "toJobConfig",
+               type: "JOB"
+           }],
+           to-connector-id: 2,
+           creation-date: 1415310157618,
+           update-date: 1415310157618,
+           creation-user: "root",
+           id: 1,
+           to-link-id: 2,
+           from-config-values: [{
+               id: 2,
+               inputs: [{
+                   id: 2,
+                   name: "fromJobConfig.inputDirectory",
+                   value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                   type: "STRING",
+                   size: 255,
+                   sensitive: false
+               }],
+               name: "fromJobConfig",
+               type: "JOB"
+           }],
+           name: "First Job",
+           from-link-id: 1
+   }
+}
+
+
+
+
+

/v1/job - [POST] - Create Job

+

Create a new job object with the corresponding config values.

+
    +
  • Method: POST
  • +
  • Format: JSON
  • +
  • Fields of Request:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
jobThe root of the post data in JSON
from-link-idThe id of the from link for the job
to-link-idThe id of the to link for the job
idThe id of the job can be left blank in the post data
enabledWhether to enable this job (true/false)
update-dateThe last updated time of this job
creation-dateThe creation time of this job
update-userThe user who updated this job
creation-userThe user who created this job
nameThe name of this job
from-config-valuesConfig input values for FROM part of the job
to-config-valuesConfig input values for TO part of the job
driver-config-valuesConfig input values for driver
from-connector-id / to-connector-idThe ids of the connectors used for the FROM and TO parts of this job
+
    +
  • Request Example:
  • +
+
{
+  job: {
+    driver-config-values: [
+      {
+        id: 7,
+        inputs: [
+          {
+            id: 25,
+            name: "throttlingConfig.numExtractors",
+            value: "3",
+            type: "INTEGER",
+            sensitive: false
+          },
+          {
+            id: 26,
+            name: "throttlingConfig.numLoaders",
+            value: "3",
+            type: "INTEGER",
+            sensitive: false
+          }
+        ],
+        name: "throttlingConfig",
+        type: "JOB"
+      }
+    ],
+    enabled: true,
+    from-connector-id: 1,
+    update-user: "root",
+    to-config-values: [
+      {
+        id: 6,
+        inputs: [
+          {
+            id: 19,
+            name: "toJobConfig.schemaName",
+            type: "STRING",
+            size: 50,
+            sensitive: false
+          },
+          {
+            id: 20,
+            name: "toJobConfig.tableName",
+            value: "text",
+            type: "STRING",
+            size: 2000,
+            sensitive: false
+          },
+          {
+            id: 21,
+            name: "toJobConfig.sql",
+            type: "STRING",
+            size: 50,
+            sensitive: false
+          },
+          {
+            id: 22,
+            name: "toJobConfig.columns",
+            type: "STRING",
+            size: 50,
+            sensitive: false
+          },
+          {
+            id: 23,
+            name: "toJobConfig.stageTableName",
+            type: "STRING",
+            size: 2000,
+            sensitive: false
+          },
+          {
+            id: 24,
+            name: "toJobConfig.shouldClearStageTable",
+            type: "BOOLEAN",
+            sensitive: false
+          }
+        ],
+        name: "toJobConfig",
+        type: "JOB"
+      }
+    ],
+    to-connector-id: 2,
+    creation-date: 1415310157618,
+    update-date: 1415310157618,
+    creation-user: "root",
+    id: -1,
+    to-link-id: 2,
+    from-config-values: [
+      {
+        id: 2,
+        inputs: [
+          {
+            id: 2,
+            name: "fromJobConfig.inputDirectory",
+            value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+            type: "STRING",
+            size: 255,
+            sensitive: false
+          }
+        ],
+        name: "fromJobConfig",
+        type: "JOB"
+      }
+    ],
+    name: "Test Job",
+    from-link-id: 1
+   }
+ }
+
+
+
    +
  • Fields of Response:
  • +
+ +++++ + + + + + + + + + + + +
FieldDescription
idThe id assigned to this newly created job
validation-resultThe validation status for the job config and driver config inputs in the post data
+
    +
  • ERROR Response Example:
  • +
+
{
+  "validation-result": [
+      {
+          "linkConfig": [
+              {
+                  "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                  "status": "ERROR"
+              }
+          ]
+      }
+  ]
+}
+
+
+
+
+

/v1/job/[jid] - [PUT] - Update Job

+

Update an existing job object with id [jid]. To make the procedure of filling inputs easier, the general practice is to get the existing job object first and then change some of the inputs.

+
    +
  • Method: PUT
  • +
  • Format: JSON
  • +
+

The request content is the same as for Create Job.

+
    +
  • OK Response Example:
  • +
+
{
+  "validation-result": [
+      {}
+  ]
+}
+
+
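As a sketch, an updated job document (here hypothetically saved as job.json) could be submitted with curl; the host, port and job id are illustrative:

curl -X PUT \
     -H "Content-Type: application/json" \
     -H "sqoop-user-name: root" \
     -d @job.json \
     http://localhost:12000/sqoop/v1/job/1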
+
+
+

/v1/job/[jid] - [DELETE] - Delete Job

+

Delete a job with id jid.

+
    +
  • Method: DELETE
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: None
  • +
+
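For example, a sketch with curl; the host, port and job id are illustrative:

curl -X DELETE -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/job/1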
+
+

/v1/job/[jid]/enable - [PUT] - Enable Job

+

Enable a job with id jid.

+
    +
  • Method: PUT
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: None
  • +
+
+
+

/v1/job/[jid]/disable - [PUT] - Disable Job

+

Disable a job with id jid.

+
    +
  • Method: PUT
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: None
  • +
+
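As a sketch, a job can be disabled and later re-enabled with curl against the two endpoints above; the host, port and job id are illustrative:

curl -X PUT -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/job/1/disable
curl -X PUT -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/job/1/enable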
+
+

/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT]- Start Job

+

Start a job with name [jname] or with id [jid] to trigger the job execution.

+
    +
  • Method: PUT
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: Submission Record
  • +
  • BOOTING Response Example
  • +
+
{
+  "submission": {
+    "progress": -1,
+    "last-update-date": 1415312531188,
+    "external-id": "job_1412137947693_0004",
+    "status": "BOOTING",
+    "job": 2,
+    "creation-date": 1415312531188,
+    "to-schema": {
+      "created": 1415312531426,
+      "name": "HDFS file",
+      "columns": []
+    },
+    "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+    "from-schema": {
+      "created": 1415312531342,
+      "name": "text",
+      "columns": [
+        {
+          "name": "id",
+          "nullable": true,
+          "unsigned": null,
+          "type": "FIXED_POINT",
+          "size": null
+        },
+        {
+          "name": "txt",
+          "nullable": true,
+          "type": "TEXT",
+          "size": null
+        }
+      ]
+    }
+  }
+}
+
+
+
    +
  • SUCCEEDED Response Example
  • +
+
{
+  submission: {
+    progress: -1,
+    last-update-date: 1415312809485,
+    external-id: "job_1412137947693_0004",
+    status: "SUCCEEDED",
+    job: 2,
+    creation-date: 1415312531188,
+    external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+    counters: {
+      org.apache.hadoop.mapreduce.JobCounter: {
+        SLOTS_MILLIS_MAPS: 373553,
+        MB_MILLIS_MAPS: 382518272,
+        TOTAL_LAUNCHED_MAPS: 10,
+        MILLIS_MAPS: 373553,
+        VCORES_MILLIS_MAPS: 373553,
+        OTHER_LOCAL_MAPS: 10
+      },
+      org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+        BYTES_WRITTEN: 0
+      },
+      org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+        BYTES_READ: 0
+      },
+      org.apache.hadoop.mapreduce.TaskCounter: {
+        MAP_INPUT_RECORDS: 0,
+        MERGED_MAP_OUTPUTS: 0,
+        PHYSICAL_MEMORY_BYTES: 4065599488,
+        SPILLED_RECORDS: 0,
+        COMMITTED_HEAP_BYTES: 3439853568,
+        CPU_MILLISECONDS: 236900,
+        FAILED_SHUFFLE: 0,
+        VIRTUAL_MEMORY_BYTES: 15231422464,
+        SPLIT_RAW_BYTES: 1187,
+        MAP_OUTPUT_RECORDS: 1000000,
+        GC_TIME_MILLIS: 7282
+      },
+      org.apache.hadoop.mapreduce.FileSystemCounter: {
+        FILE_WRITE_OPS: 0,
+        FILE_READ_OPS: 0,
+        FILE_LARGE_READ_OPS: 0,
+        FILE_BYTES_READ: 0,
+        HDFS_BYTES_READ: 1187,
+        FILE_BYTES_WRITTEN: 1191230,
+        HDFS_LARGE_READ_OPS: 0,
+        HDFS_WRITE_OPS: 10,
+        HDFS_READ_OPS: 10,
+        HDFS_BYTES_WRITTEN: 276389736
+      },
+      org.apache.sqoop.submission.counter.SqoopCounters: {
+        ROWS_READ: 1000000
+      }
+    }
+  }
+}
+
+
+
    +
  • ERROR Response Example
  • +
+
{
+  "submission": {
+    "progress": -1,
+    "last-update-date": 1415312390570,
+    "status": "FAILURE_ON_SUBMIT",
+    "exception": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+    "job": 1,
+    "creation-date": 1415312390570,
+    "to-schema": {
+      "created": 1415312390797,
+      "name": "text",
+      "columns": [
+        {
+          "name": "id",
+          "nullable": true,
+          "unsigned": null,
+          "type": "FIXED_POINT",
+          "size": null
+        },
+        {
+          "name": "txt",
+          "nullable": true,
+          "type": "TEXT",
+          "size": null
+        }
+      ]
+    },
+    "from-schema": {
+      "created": 1415312390778,
+      "name": "HDFS file",
+      "columns": [
+      ]
+    },
+    "exception-trace": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_00"
+  }
+}
+
+
+
+
+

/v1/job/[jid]/stop or /v1/job/[jname]/stop - [PUT]- Stop Job

+

Stop a job with name [jname] or with id [jid] to abort the running job.

+
    +
  • Method: PUT
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: Submission Record
  • +
+
+
+

/v1/job/[jid]/status or /v1/job/[jname]/status - [GET]- Get Job Status

+

Get the status of the running job with name [jname] or with id [jid].

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Response Content: Submission Record
  • +
+
{
+    "submission": {
+        "progress": 0.25,
+        "last-update-date": 1415312603838,
+        "external-id": "job_1412137947693_0004",
+        "status": "RUNNING",
+        "job": 2,
+        "creation-date": 1415312531188,
+        "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/"
+    }
+}
+
+
+
+
+

/v1/submissions - [GET] - Get all job Submissions

+

Get all the submissions for every job started in Sqoop. A sketch of such a request is shown below.

+
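For example, with curl; the host and port are illustrative:

curl -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/submissions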
+
+

/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job

+

Retrieve all past job submissions for the given job. Each submission record will have details such as the status, counters and URLs for those submissions.

+

Provide the name of the job in the url [jname] part.

+
    +
  • Method: GET
  • +
  • Format: JSON
  • +
  • Request Content: None
  • +
  • Fields of Response:
  • +
+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
progressThe progress of the running Sqoop job
jobThe id of the Sqoop job
creation-dateThe submission timestamp
last-update-dateThe timestamp of the last status update
statusThe status of this job submission
external-idThe job id of Sqoop job running on Hadoop
external-linkThe link to track the job status on Hadoop
+
    +
  • Response Example:
  • +
+
{
+  submissions: [
+    {
+      progress: -1,
+      last-update-date: 1415312809485,
+      external-id: "job_1412137947693_0004",
+      status: "SUCCEEDED",
+      job: 2,
+      creation-date: 1415312531188,
+      external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+      counters: {
+        org.apache.hadoop.mapreduce.JobCounter: {
+          SLOTS_MILLIS_MAPS: 373553,
+          MB_MILLIS_MAPS: 382518272,
+          TOTAL_LAUNCHED_MAPS: 10,
+          MILLIS_MAPS: 373553,
+          VCORES_MILLIS_MAPS: 373553,
+          OTHER_LOCAL_MAPS: 10
+        },
+        org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+          BYTES_WRITTEN: 0
+        },
+        org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+          BYTES_READ: 0
+        },
+        org.apache.hadoop.mapreduce.TaskCounter: {
+          MAP_INPUT_RECORDS: 0,
+          MERGED_MAP_OUTPUTS: 0,
+          PHYSICAL_MEMORY_BYTES: 4065599488,
+          SPILLED_RECORDS: 0,
+          COMMITTED_HEAP_BYTES: 3439853568,
+          CPU_MILLISECONDS: 236900,
+          FAILED_SHUFFLE: 0,
+          VIRTUAL_MEMORY_BYTES: 15231422464,
+          SPLIT_RAW_BYTES: 1187,
+          MAP_OUTPUT_RECORDS: 1000000,
+          GC_TIME_MILLIS: 7282
+        },
+        org.apache.hadoop.mapreduce.FileSystemCounter: {
+          FILE_WRITE_OPS: 0,
+          FILE_READ_OPS: 0,
+          FILE_LARGE_READ_OPS: 0,
+          FILE_BYTES_READ: 0,
+          HDFS_BYTES_READ: 1187,
+          FILE_BYTES_WRITTEN: 1191230,
+          HDFS_LARGE_READ_OPS: 0,
+          HDFS_WRITE_OPS: 10,
+          HDFS_READ_OPS: 10,
+          HDFS_BYTES_WRITTEN: 276389736
+        },
+        org.apache.sqoop.submission.counter.SqoopCounters: {
+          ROWS_READ: 1000000
+        }
+      }
+    },
+    {
+      progress: -1,
+      last-update-date: 1415312390570,
+      status: "FAILURE_ON_SUBMIT",
+      exception: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+      job: 1,
+      creation-date: 1415312390570,
+      exception-trace: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
+    }
+  ]
+}
+
+
+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/Sqoop5MinutesDemo.html =================================================================== --- content/resources/docs/1.99.4/Sqoop5MinutesDemo.html (revision 0) +++ content/resources/docs/1.99.4/Sqoop5MinutesDemo.html (working copy) @@ -0,0 +1,292 @@ + + + + + + + + + + Sqoop 5 Minutes Demo — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Sqoop 5 Minutes Demo

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Sqoop 5 Minutes Demo

+

This page will walk you through the basic usage of Sqoop. You need to have installed and configured the Sqoop server and client in order to follow this guide; the installation procedure is described on the Installation page. Please note that the exact output shown on this page might differ from yours as Sqoop evolves, but all major information should remain the same.

+

Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. Querying an entity is supported both by its unique name and by its persistent database id.

+
+

Starting Client

+

Start the client in interactive mode using the following command:

+
sqoop2-shell
+
+
+

Configure the client to use your Sqoop server:

+
sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+
+

Verify that the connection is working with a simple version check:

+
sqoop:000> show version --all
+client version:
+  Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+  Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+server version:
+  Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+  Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+API versions:
+  [v1]
+
+
+

You should receive output similar to that shown above, describing the Sqoop client build version, the server build version, and the supported versions of the REST API.

+

You can use the help command to check all the supported commands in the Sqoop shell:

+
+
sqoop:000> help
For information about Sqoop, visit: http://sqoop.apache.org/

Available commands:
  exit    (x  ) Exit the shell
  history (H  ) Display, manage and recall edit-line history
  help    (h  ) Display this help message
  set     (st ) Configure various client options and settings
  show    (sh ) Display various objects and configuration options
  create  (cr ) Create new object in Sqoop repository
  delete  (d  ) Delete existing object in Sqoop repository
  update  (up ) Update objects in Sqoop repository
  clone   (cl ) Create new object based on existing one
  start   (sta) Start job
  stop    (stp) Stop job
  status  (stu) Display status of a job
  enable  (en ) Enable object in Sqoop repository
  disable (di ) Disable object in Sqoop repository
+
+
+
+
+ +
+

Creating Job Object

+

Connectors implement the From direction for reading data from a data source and/or the To direction for writing data to a data source; the Generic JDBC Connector supports both. The list of supported directions for each connector can be seen in the output of the show connector --all command above. In order to create a job, we need to specify the From and To parts of the job, uniquely identified by their link ids. We already have two links created in the system; you can verify this with the following command:

+
+
sqoop:000> show links --all
2 link(s) to show:
link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
Using Connector id 2
  Link configuration
    JDBC Driver Class: com.mysql.jdbc.Driver
    JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
    Username: sqoop
    Password:
    JDBC Connection Properties:
      protocol = tcp
link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
Using Connector id 1
  Link configuration
    HDFS URI: hdfs://nameservice1:8020/
+
+
+
+
+

Next, we can use the two link Ids to associate the From and To for the job.

+
 sqoop:000> create job -f 1 -t 2
+ Creating job for links with from id 1 and to id 2
+ Please fill following values to create new job object
+ Name: Sqoopy
+
+ FromJob configuration
+
+  Schema name:(Required)sqoop
+  Table name:(Required)sqoop
+  Table SQL statement:(Optional)
+  Table column names:(Optional)
+  Partition column name:(Optional) id
+  Null value allowed for the partition column:(Optional)
+  Boundary query:(Optional)
+
+ToJob configuration
+
+  Output format:
+    0 : TEXT_FILE
+    1 : SEQUENCE_FILE
+  Choose: 0
+  Compression format:
+    0 : NONE
+    1 : DEFAULT
+    2 : DEFLATE
+    3 : GZIP
+    4 : BZIP2
+    5 : LZO
+    6 : LZ4
+    7 : SNAPPY
+    8 : CUSTOM
+  Choose: 0
+  Custom compression format:(Optional)
+  Output directory:(Required)/root/projects/sqoop
+
+  Driver Config
+
+  Extractors: 2
+  Loaders: 2
+  New job was successfully created with validation status OK and persistent id 1
+
+
+

Our new job object was created with assigned id 1.

+
+
+

Start Job (a.k.a. Data transfer)

+
+

You can start a Sqoop job with the following command:

+
sqoop:000> start job --jid 1
+Submission details
+Job ID: 1
+Server URL: http://localhost:12000/sqoop/
+Created by: root
+Creation date: 2014-11-04 19:43:29 PST
+Lastly updated by: root
+External ID: job_1412137947693_0001
+  http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+
+
+

You can iteratively check your running job status with the status job command:

+
sqoop:000> status job --jid 1
+Submission details
+Job ID: 1
+Server URL: http://localhost:12000/sqoop/
+Created by: root
+Creation date: 2014-11-04 19:43:29 PST
+Lastly updated by: root
+External ID: job_1412137947693_0001
+  http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+2014-11-04 20:09:16 PST: RUNNING  - 0.00 %
+
+
+

And finally, you can stop the running job at any time using the stop job command:

+
sqoop:000> stop job --jid 1
+
+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/Tools.html =================================================================== --- content/resources/docs/1.99.4/Tools.html (revision 0) +++ content/resources/docs/1.99.4/Tools.html (working copy) @@ -0,0 +1,167 @@ + + + + + + + + + + Tools — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Tools

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Tools

+

Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. A tool execution will always perform a given task and finish; there are no long-running services implemented as tools.

+

In order to perform the maintenance task each tool is supposed to do, tools need to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the CLASSPATH and other environment variables that might be required. However, it is up to the administrator to run the tool under the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).

+
+

Note

+

Running tools while the Sqoop server is also running is not recommended, as it might lead to data corruption and service disruption.

+
+

List of available tools:

+ +

To run the desired tool, execute the sqoop2-tool binary with the desired tool name. For example, to run the verify tool:

+
sqoop2-tool verify
+
+
+
+

Note

+

Stop the Sqoop server before running Sqoop tools. Running tools while the Sqoop server is running can lead to data corruption and service disruption.

+
+
+

Verify

+

The verify tool will verify the Sqoop server configuration by starting all subsystems, with the exception of the servlets, and tearing them down.

+

To run the verify tool:

+
sqoop2-tool verify
+
+
+

If the verification process succeeds, you should see messages like:

+
Verification was successful.
+Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+
+

If the verification process finds any inconsistencies, it will print out the following message instead:

+
Verification has failed, please check Server logs for further details.
+Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+
+

Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop server normally logs into.

+
+
+

Upgrade

+

Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata. Running this tool on a Sqoop deployment that has already been upgraded will have no effect.

+

To run the upgrade tool:

+
sqoop2-tool upgrade
+
+
+

Upon a successful upgrade you should see the following message:

+
Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+
+

Execution failure will show the following message instead:

+
Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+
+

Further details on why the upgrade failed will be available in the Sqoop server log - the same file that the Sqoop server normally logs into.

+
+
+

RepositoryDump

+

Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.

+

To run the repositorydump tool:

+
sqoop2-tool repositorydump -o repository.json
+
+
+

As an option, the administrator can choose to include sensitive information such as database connection passwords in the file:

+
sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+
+

Upon successful execution, you should see the following message:

+
Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
+
+
+

If the repository dump has failed, you will see the following message instead:

+
Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+
+

Further details on why the dump failed will be available in the Sqoop server log - the same file that the Sqoop server normally logs into.

+
+
+

RepositoryLoad

+

Reads a JSON-formatted file created by RepositoryDump and loads it into the current Sqoop repository.

+

To run the repositoryLoad tool:

+
sqoop2-tool repositoryload -i repository.json
+
+
+

Upon successful execution, you should see the following message:

+
Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
+
+
+

If the repository load failed, you will see the following message instead:

+
Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+
+

You might also see an exception instead of this message. Further details on why the load failed will be available in the Sqoop server log - the same file that the Sqoop server normally logs into.

+
+

Note

+

If the repository dump was created without passwords (default), the connections will not contain a password and the jobs will fail to execute. In that case you’ll need to manually update the connections and set the password.

+
+
+

Note

+

The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.

+
+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/Upgrade.html =================================================================== --- content/resources/docs/1.99.4/Upgrade.html (revision 0) +++ content/resources/docs/1.99.4/Upgrade.html (working copy) @@ -0,0 +1,121 @@ + + + + + + + + + + Upgrade — Apache Sqoop documentation + + + + + + + + + + + + + +

+ Apache Sqoop documentation

+

Upgrade

+
+
+ +

+ Contents +

+ +
+
+ + +
+

Upgrade

+

This page describes the procedure that you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and the server components is discussed separately.

+
+

Note

+

Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.

+
+
+

Upgrading Server

+

As the Sqoop server uses a database repository for persisting Sqoop entities such as the connector, driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.

+

There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.

+

It is strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation; for example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.

+
+

Upgrading Server using upgrade tool

+

The preferred upgrade path is to explicitly run the Upgrade Tool. The first step, however, is to shut down the server, as having both the server and the upgrade utility accessing the same repository might corrupt it:

+
sqoop2-server stop
+
+
+

When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool:

+
sqoop2-tool upgrade
+
+
+

You should see that the upgrade process has been successful:

+
Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+
+

In case of any failure, please take a look at the Upgrade Tool documentation page.

+
+
+

Upgrading Server on start-up

+

The capability of performing the upgrade has been built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server:

+
sqoop2-server stop
+
+
+

Before starting the server again, you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop server start up.

+

You need to set the following property in the configuration file sqoop.properties for the repository schema upgrade.

+
org.apache.sqoop.repository.schema.immutable=false
+
+
+

You need to set the following property in the configuration file sqoop.properties for the connector config data upgrade.

+
org.apache.sqoop.connector.autoupgrade=true
+
+
+

You need to set the following property in the configuration file sqoop.properties for the driver config data upgrade.

+
org.apache.sqoop.driver.autoupgrade=true
+
+
+
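Putting the three settings together, the relevant part of sqoop.properties would look like this:

org.apache.sqoop.repository.schema.immutable=false
org.apache.sqoop.connector.autoupgrade=true
org.apache.sqoop.driver.autoupgrade=true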

When all properties are set, start the Sqoop server using the following command:

+
sqoop2-server start
+
+
+

All required actions will be performed automatically during the server bootstrap. It is strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.

+
+
+
+

Upgrading Client

+

The client does not require any manual steps during an upgrade. Replacing the binaries with the updated version is sufficient.

+
+
+ + +
+
+ +

+ Contents +

+ +
+ + + + \ No newline at end of file Index: content/resources/docs/1.99.4/_sources/BuildingSqoop2.txt =================================================================== --- content/resources/docs/1.99.4/_sources/BuildingSqoop2.txt (revision 0) +++ content/resources/docs/1.99.4/_sources/BuildingSqoop2.txt (working copy) @@ -0,0 +1,69 @@ +.. Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + + +================================ +Building Sqoop2 from source code +================================ + +This guide will show you how to build Sqoop2 from source code. Sqoop is using `maven `_ as build system. You you will need to use at least version 3.0 as older versions will not work correctly. All other dependencies will be downloaded by maven automatically. With exception of special JDBC drivers that are needed only for advanced integration tests. + +Downloading source code +----------------------- + +Sqoop project is using git as a revision control system hosted at Apache Software Foundation. You can clone entire repository using following command: + +:: + + git clone https://git-wip-us.apache.org/repos/asf/sqoop.git sqoop2 + +Sqoop2 is currently developed in special branch ``sqoop2`` that you need to check out after clone: + +:: + + cd sqoop2 + git checkout sqoop2 + +Building project +---------------- + +You can use usual maven targets like ``compile`` or ``package`` to build the project. Sqoop supports two major Hadoop revisions at the moment - 1.x and 2.x. As compiled code for one Hadoop major version can't be used on another, you must compile Sqoop against appropriate Hadoop version. You can change the target Hadoop version by specifying ``-Dhadoop.profile=$hadoopVersion`` on the maven command line. Possible values of ``$hadoopVersions`` are 100 and 200 for Hadoop version 1.x and 2.x respectively. Sqoop will compile against Hadoop 2 by default. Following example will compile Sqoop against Hadoop 1.x: + +:: + + mvn compile -Dhadoop.profile=100 + +Maven target ``package`` can be used to create Sqoop packages similar to the ones that are officially available for download. Sqoop will build only source tarball by default. You need to specify ``-Pbinary`` to build binary distribution. You might need to explicitly specify Hadoop version if the default is not accurate. + +:: + + mvn package -Pbinary + +Running tests +------------- + +Sqoop supports two different sets of tests. First smaller and much faster set is called unit tests and will be executed on maven target ``test``. Second larger set of integration tests will be executed on maven target ``integration-test``. Please note that integration tests might require manual steps for installing various JDBC drivers into your local maven cache. 
+ +Example for running unit tests: + +:: + + mvn test + +Example for running integration tests: + +:: + + mvn integration-test Index: content/resources/docs/1.99.4/_sources/ClientAPI.txt =================================================================== --- content/resources/docs/1.99.4/_sources/ClientAPI.txt (revision 0) +++ content/resources/docs/1.99.4/_sources/ClientAPI.txt (working copy) @@ -0,0 +1,304 @@ +.. Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + + +=========================== +Sqoop Java Client API Guide +=========================== + +This document will explain how to use Sqoop Java Client API with external application. Client API allows you to execute the functions of sqoop commands. It requires Sqoop Client JAR and its dependencies. + +The main class that provides wrapper methods for all the supported operations is the +:: + + public class SqoopClient { + ... + } + +Java Client API is explained using Generic JDBC Connector example. Before executing the application using the sqoop client API, check whether sqoop server is running. + +Workflow +======== + +Given workflow has to be followed for executing a sqoop job in Sqoop server. + + 1. Create LINK object for a given connectorId - Creates Link object and returns linkId (lid) + 2. Create a JOB for a given "from" and "to" linkId - Create Job object and returns jobId (jid) + 3. Start the JOB for a given jobId - Start Job on the server and creates a submission record + +Project Dependencies +==================== +Here given maven dependency + +:: + + + org.apache.sqoop + sqoop-client + ${requestedVersion} + + +Initialization +============== + +First initialize the SqoopClient class with server URL as argument. + +:: + + String url = "http://localhost:12000/sqoop/"; + SqoopClient client = new SqoopClient(url); + +Server URL value can be modfied by setting value to setServerUrl(String) method + +:: + + client.setServerUrl(newUrl); + + +Link +==== +Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The registered connector implementation will provide logic to read from and/or write to a data source that it represents. A connector can have one or more links associated with it. The java client API allows you to create, update and delete a link for any registered connector. Creating or updating a link requires you to populate the Link Config for that particular connector. Hence the first thing to do is get the list of registered connectors and select the connector for which you would like to create a link. Then +you can get the list of all the config/inputs using `Display Config and Input Names For Connector`_ for that connector. 
+ + +Save Link +--------- + +First create a new link by invoking ``createLink(cid)`` method with connector Id and it returns a MLink object with dummy id and the unfilled link config inputs for that connector. Then fill the config inputs with relevant values. Invoke ``saveLink`` passing it the filled MLink object. + +:: + + // create a placeholder for link + long connectorId = 1; + MLink link = client.createLink(connectorId); + link.setName("Vampire"); + link.setCreationUser("Buffy"); + MLinkConfig linkConfig = link.getConnectorLinkConfig(); + // fill in the link config values + linkConfig.getStringInput("linkConfig.connectionString").setValue("jdbc:mysql://localhost/my"); + linkConfig.getStringInput("linkConfig.jdbcDriver").setValue("com.mysql.jdbc.Driver"); + linkConfig.getStringInput("linkConfig.username").setValue("root"); + linkConfig.getStringInput("linkConfig.password").setValue("root"); + // save the link object that was filled + Status status = client.saveLink(link); + if(status.canProceed()) { + System.out.println("Created Link with Link Id : " + link.getPersistenceId()); + } else { + System.out.println("Something went wrong creating the link"); + } + +``status.canProceed()`` returns true if status is OK or a WARNING. Before sending the status, the link config values are validated using the corresponding validator associated with th link config inputs. + +On successful execution of the saveLink method, new link Id is assigned to the link object else an exception is thrown. ``link.getPersistenceId()`` method returns the unique Id for this object persisted in the sqoop repository. + +User can retrieve a link using the following methods + ++----------------------------+--------------------------------------+ +| Method | Description | ++============================+======================================+ +| ``getLink(lid)`` | Returns a link by id | ++----------------------------+--------------------------------------+ +| ``getLinks()`` | Returns list of links in the sqoop | ++----------------------------+--------------------------------------+ + +Job +=== + +A sqoop job holds the ``From`` and ``To`` parts for transferring data from the ``From`` data source to the ``To`` data source. Both the ``From`` and the ``To`` are uniquely identified by their corresponding connector Link Ids. i.e when creating a job we have to specifiy the ``FromLinkId`` and the ``ToLinkId``. Thus the pre-requisite for creating a job is to first create the links as described above. + +Once the linkIds for the ``From`` and ``To`` are given, then the job configs for the associated connector for the link object have to be filled. You can get the list of all the from and to job config/inputs using `Display Config and Input Names For Connector`_ for that connector. A connector can have one or more links. We then use the links in the ``From`` and ``To`` direction to populate the corresponding ``MFromConfig`` and ``MToConfig`` respectively. + +In addition to filling the job configs for the ``From`` and the ``To`` representing the link, we also need to fill the driver configs that control the job execution engine environment. For example, if the job execution engine happens to be the MapReduce we will specifiy the number of mappers to be used in reading data from the ``From`` data source. 
+
+Save Job
+--------
+Here is the code to create and then save a job:
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  //Creating dummy job object
+  long fromLinkId = 1;// for jdbc connector
+  long toLinkId = 2; // for HDFS connector
+  MJob job = client.createJob(fromLinkId, toLinkId);
+  job.setName("Vampire");
+  job.setCreationUser("Buffy");
+  // set the "FROM" link job config values
+  MFromConfig fromJobConfig = job.getFromJobConfig();
+  fromJobConfig.getStringInput("fromJobConfig.schemaName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.tableName").setValue("sqoop");
+  fromJobConfig.getStringInput("fromJobConfig.partitionColumn").setValue("id");
+  // set the "TO" link job config values
+  MToConfig toJobConfig = job.getToJobConfig();
+  toJobConfig.getStringInput("toJobConfig.outputDirectory").setValue("/usr/tmp");
+  // set the driver config values
+  MDriverConfig driverConfig = job.getDriverConfig();
+  driverConfig.getStringInput("throttlingConfig.numExtractors").setValue("3");
+
+  Status status = client.saveJob(job);
+  if(status.canProceed()) {
+    System.out.println("Created Job with Job Id: "+ job.getPersistenceId());
+  } else {
+    System.out.println("Something went wrong creating the job");
+  }
+
+The user can retrieve a job using the following methods
+
++----------------------------+--------------------------------------+
+| Method                     | Description                          |
++============================+======================================+
+| ``getJob(jid)``            | Returns a job by id                  |
++----------------------------+--------------------------------------+
+| ``getJobs()``              | Returns the list of jobs in sqoop    |
++----------------------------+--------------------------------------+
+
+
+List of status codes
+--------------------
+
++------------------+------------------------------------------------------------------------------------------------------------+
+| Status           | Description                                                                                                |
++==================+============================================================================================================+
+| ``OK``           | There are no issues, no warnings.                                                                          |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``WARNING``      | Validated entity is correct enough to proceed. Not a fatal error                                           |
++------------------+------------------------------------------------------------------------------------------------------------+
+| ``ERROR``        | There are serious issues with the validated entity. We can't proceed until the reported issues are resolved. |
++------------------+------------------------------------------------------------------------------------------------------------+
+
+View Error or Warning validation message
+----------------------------------------
+
+In case of any WARNING or ERROR status, the user has to iterate over the list of validation messages.
+
+::
+
+  printMessage(link.getConnectorLinkConfig().getConfigs());
+
+  private static void printMessage(List<MConfig> configs) {
+    for(MConfig config : configs) {
+      List<MInput<?>> inputlist = config.getInputs();
+      if (config.getValidationMessages() != null) {
+        // print every validation message
+        for(Message message : config.getValidationMessages()) {
+          System.out.println("Config validation message: " + message.getMessage());
+        }
+      }
+      for (MInput<?> minput : inputlist) {
+        if (minput.getValidationStatus() == Status.WARNING) {
+          for(Message message : minput.getValidationMessages()) {
+            System.out.println("Config Input Validation Warning: " + message.getMessage());
+          }
+        }
+        else if (minput.getValidationStatus() == Status.ERROR) {
+          for(Message message : minput.getValidationMessages()) {
+            System.out.println("Config Input Validation Error: " + message.getMessage());
+          }
+        }
+      }
+    }
+  }
+
+Updating link and job
+---------------------
+After creating a link or a job in the repository, you can update or delete the link or job using the following functions
+
++----------------------------------+------------------------------------------------------------------------------------+
+| Method                           | Description                                                                        |
++==================================+====================================================================================+
+| ``updateLink(link)``             | Invoke update with link and check status for any errors or warnings               |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteLink(lid)``              | Delete link. Deletes only if the specified link is not used by any job            |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``updateJob(job)``               | Invoke update with job and check status for any errors or warnings                |
++----------------------------------+------------------------------------------------------------------------------------+
+| ``deleteJob(jid)``               | Delete job                                                                         |
++----------------------------------+------------------------------------------------------------------------------------+
+
+Job Start
+=========
+
+Starting a job requires a job id. On successful start, the getStatus() method returns "BOOTING" or "RUNNING".
+
+::
+
+  //Job start
+  long jobId = 1;
+  MSubmission submission = client.startJob(jobId);
+  System.out.println("Job Submission Status : " + submission.getStatus());
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+  System.out.println("Hadoop job id :" + submission.getExternalId());
+  System.out.println("Job link : " + submission.getExternalLink());
+  Counters counters = submission.getCounters();
+  if(counters != null) {
+    System.out.println("Counters:");
+    for(CounterGroup group : counters) {
+      System.out.print("\t");
+      System.out.println(group.getName());
+      for(Counter counter : group) {
+        System.out.print("\t\t");
+        System.out.print(counter.getName());
+        System.out.print(": ");
+        System.out.println(counter.getValue());
+      }
+    }
+  }
+  if(submission.getExceptionInfo() != null) {
+    System.out.println("Exception info : " + submission.getExceptionInfo());
+  }
+
+
+  //Check job status for a running job
+  submission = client.getJobStatus(jobId);
+  if(submission.getStatus().isRunning() && submission.getProgress() != -1) {
+    System.out.println("Progress : " + String.format("%.2f %%", submission.getProgress() * 100));
+  }
+
+  //Stop a running job
+  client.stopJob(jobId);
+
+In the above code block the job start is asynchronous. For a synchronous job start, use the ``startJob(jid, callback, pollTime)`` method. If you are not interested in getting the job status, then invoke the same method with "null" as the value for the callback parameter; this returns the final job status. ``pollTime`` is the request interval for getting the job status from the sqoop server and the value should be greater than zero. We will frequently hit the sqoop server if a low value is given for the ``pollTime``. When a synchronous job is started with a non-null callback, it first invokes the callback's ``submitted(MSubmission)`` method on successful start; after every poll time interval, it then invokes the ``updated(MSubmission)`` method on the callback API; and finally on finishing the job execution it invokes the ``finished(MSubmission)`` method on the callback API.
+
+Display Config and Input Names For Connector
+============================================
+
+You can view the config/input names for the link and job config types per connector
+
+::
+
+  String url = "http://localhost:12000/sqoop/";
+  SqoopClient client = new SqoopClient(url);
+  long connectorId = 1;
+  // link config for connector
+  describe(client.getConnector(connectorId).getLinkConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // from job config for connector
+  describe(client.getConnector(connectorId).getFromConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+  // to job config for the connector
+  describe(client.getConnector(connectorId).getToConfig().getConfigs(), client.getConnectorConfigBundle(connectorId));
+
+  void describe(List<MConfig> configs, ResourceBundle resource) {
+    for (MConfig config : configs) {
+      System.out.println(resource.getString(config.getLabelKey())+":");
+      List<MInput<?>> inputs = config.getInputs();
+      for (MInput<?> input : inputs) {
+        System.out.println(resource.getString(input.getLabelKey()) + " : " + input.getValue());
+      }
+      System.out.println();
+    }
+  }
+
+
+The above Sqoop 2 Client API tutorial explained how to create a link, create a job and then start the job.
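+
+As a closing note on the synchronous mode described above, a minimal blocking start might look like the following sketch. It assumes the poll time is in milliseconds; a ``null`` callback means only the final status is of interest.
+::
+
+  // block until the job finishes, polling the server every 5 seconds (milliseconds assumed)
+  MSubmission finalSubmission = client.startJob(jobId, null, 5000);
+  System.out.println("Final status : " + finalSubmission.getStatus());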
Index: content/resources/docs/1.99.4/_sources/CommandLineClient.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/CommandLineClient.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/CommandLineClient.txt	(working copy)
@@ -0,0 +1,533 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==================
+Command Line Shell
+==================
+
+Sqoop 2 provides a command line shell that is capable of communicating with the Sqoop 2 server using the REST interface. The client is able to run in two modes - interactive and batch mode. The commands ``create``, ``update`` and ``clone`` are not currently supported in batch mode. Interactive mode supports all available commands.
+
+You can start the Sqoop 2 client in interactive mode using the command ``sqoop2-shell``::
+
+  sqoop2-shell
+
+Batch mode can be started by adding an additional argument representing the path to your Sqoop client script: ::
+
+  sqoop2-shell /path/to/your/script.sqoop
+
+A Sqoop client script is expected to contain valid Sqoop client commands, empty lines and lines starting with ``#`` that denote comments. Comments and empty lines are ignored; all other lines are interpreted. Example script: ::
+
+  # Specify company server
+  set server --host sqoop2.company.net
+
+  # Executing given job
+  start job --jid 1
+
+
+.. contents:: Table of Contents
+
+Resource file
+=============
+
+The Sqoop 2 client has the ability to load resource files, similarly to other command line tools. At the beginning of execution the Sqoop client will check for the existence of the file ``.sqoop2rc`` in the home directory of the currently logged-in user. If such a file exists, it will be interpreted before any additional actions. This file is loaded in both interactive and batch mode. It can be used to execute any batch compatible commands.
+
+Example resource file: ::
+
+  # Configure our Sqoop 2 server automatically
+  set server --host sqoop2.company.net
+
+  # Run in verbose mode by default
+  set option --name verbose --value true
+
+Commands
+========
+
+Sqoop 2 contains several commands that are documented in this section. Each command has one or more functions that accept various arguments. Not all commands are supported in both interactive and batch mode.
+
+Auxiliary Commands
+------------------
+
+Auxiliary commands improve the user experience and run purely on the client side. Thus they do not need a working connection to the server.
+
+* ``exit`` Exit client immediately. This command can also be executed by sending an EOT (end of transmission) character. It's CTRL+D on most common Linux shells like Bash or Zsh.
+* ``history`` Print out command history.
+  Please note that the Sqoop client saves history across executions, so you might see commands that you've executed in previous runs.
+* ``help`` Show all available commands with short in-shell documentation.
+
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+Set Command
+-----------
+
+The set command allows you to set various properties of the client. Like the auxiliary commands, set does not require a connection to the Sqoop server. The set command is not used to reconfigure the Sqoop server.
+
+Available functions:
+
++---------------+------------------------------------------+
+| Function      | Description                              |
++===============+==========================================+
+| ``server``    | Set connection configuration for server |
++---------------+------------------------------------------+
+| ``option``    | Set various client side options          |
++---------------+------------------------------------------+
+
+Set Server Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure the connection to the Sqoop server - host, port and web application name. Available arguments:
+
++-----------------------+---------------+--------------------------------------------------+
+| Argument              | Default value | Description                                      |
++=======================+===============+==================================================+
+| ``-h``, ``--host``    | localhost     | Server name (FQDN) where Sqoop server is running |
++-----------------------+---------------+--------------------------------------------------+
+| ``-p``, ``--port``    | 12000         | TCP Port                                         |
++-----------------------+---------------+--------------------------------------------------+
+| ``-w``, ``--webapp``  | sqoop         | Tomcat's web application name                    |
++-----------------------+---------------+--------------------------------------------------+
+| ``-u``, ``--url``     |               | Sqoop Server in url format                       |
++-----------------------+---------------+--------------------------------------------------+
+
+Example: ::
+
+  set server --host sqoop2.company.net --port 80 --webapp sqoop
+
+or ::
+
+  set server --url http://sqoop2.company.net:80/sqoop
+
+Note: When the ``--url`` option is given, the ``--host``, ``--port`` and ``--webapp`` options will be ignored.
+
+Set Option Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure Sqoop client related options. This function has two required arguments, ``name`` and ``value``. Name represents the internal property name and value holds the new value that should be set.
+The list of available option names follows:
+
++-------------------+---------------+---------------------------------------------------------------------+
+| Option name       | Default value | Description                                                         |
++===================+===============+=====================================================================+
+| ``verbose``       | false         | Client will print additional information if verbose mode is enabled |
++-------------------+---------------+---------------------------------------------------------------------+
+| ``poll-timeout``  | 10000         | Server poll timeout in milliseconds                                 |
++-------------------+---------------+---------------------------------------------------------------------+
+
+Example: ::
+
+  set option --name verbose --value true
+  set option --name poll-timeout --value 20000
+
+Show Command
+------------
+
+The show command displays various information as described below.
+
+Available functions:
+
++----------------+--------------------------------------------------------------------------------------------------------+
+| Function       | Description                                                                                            |
++================+========================================================================================================+
+| ``server``     | Display connection information to the sqoop server (host, port, webapp)                               |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``option``     | Display various client side options                                                                    |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``version``    | Show client build version; with the option -all it shows server build version and supported api versions |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``connector``  | Show connector configurable and its related configs                                                   |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``driver``     | Show driver configurable and its related configs                                                      |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``link``       | Show links in sqoop                                                                                   |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``job``        | Show jobs in sqoop                                                                                    |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``submission`` | Show job submissions in sqoop                                                                         |
++----------------+--------------------------------------------------------------------------------------------------------+
+
+Show Server Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show details about the connection to the Sqoop server.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              | Description                                                  |
++=======================+==============================================================+
+| ``-a``, ``--all``     | Show all connection related information (host, port, webapp) |
++-----------------------+--------------------------------------------------------------+
+| ``-h``, ``--host``    | Show host                                                    |
++-----------------------+--------------------------------------------------------------+
+| ``-p``, ``--port``    | Show port                                                    |
++-----------------------+--------------------------------------------------------------+
+| ``-w``, ``--webapp``  | Show web application name                                    |
++-----------------------+--------------------------------------------------------------+
+
+Example: ::
+
+  show server --all
+
+Show Option Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show values of various client side options.
+This function will show all client options when called without arguments.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              | Description                                                  |
++=======================+==============================================================+
+| ``-n``, ``--name``    | Show client option value with given name                     |
++-----------------------+--------------------------------------------------------------+
+
+Please check the table in the `Set Option Function`_ section to get a list of all supported option names.
+
+Example: ::
+
+  show option --name verbose
+
+Show Version Function
+~~~~~~~~~~~~~~~~~~~~~
+
+Show build versions of both client and server as well as the supported rest api versions.
+
++------------------------+-----------------------------------------------+
+| Argument               | Description                                   |
++========================+===============================================+
+| ``-a``, ``--all``      | Show all versions (server, client, api)       |
++------------------------+-----------------------------------------------+
+| ``-c``, ``--client``   | Show client build version                     |
++------------------------+-----------------------------------------------+
+| ``-s``, ``--server``   | Show server build version                     |
++------------------------+-----------------------------------------------+
+| ``-p``, ``--api``      | Show supported api versions                   |
++------------------------+-----------------------------------------------+
+
+Example: ::
+
+  show version --all
+
+Show Connector Function
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the persisted connector configurable and its related configs used in creating associated link and job objects.
+
++------------------------+------------------------------------------------+
+| Argument               | Description                                    |
++========================+================================================+
+| ``-a``, ``--all``      | Show information for all connectors            |
++------------------------+------------------------------------------------+
+| ``-c``, ``--cid <x>``  | Show information for connector with id ``<x>`` |
++------------------------+------------------------------------------------+
+
+Example: ::
+
+  show connector --all or show connector
+
+Show Driver Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show the persisted driver configurable and its related configs used in creating job objects.
+
+This function does not have any extra arguments. There is only one registered driver in sqoop.
+
+Example: ::
+
+  show driver
+
+Show Link Function
+~~~~~~~~~~~~~~~~~~
+
+Show persisted link objects.
+
++------------------------+------------------------------------------------------+
+| Argument               | Description                                          |
++========================+======================================================+
+| ``-a``, ``--all``      | Show all available links                             |
++------------------------+------------------------------------------------------+
+| ``-x``, ``--lid <x>``  | Show link with id ``<x>``                            |
++------------------------+------------------------------------------------------+
+
+Example: ::
+
+  show link --all or show link
+
+Show Job Function
+~~~~~~~~~~~~~~~~~
+
+Show persisted job objects.
+
++------------------------+----------------------------------------------+
+| Argument               | Description                                  |
++========================+==============================================+
+| ``-a``, ``--all``      | Show all available jobs                      |
++------------------------+----------------------------------------------+
+| ``-j``, ``--jid <x>``  | Show job with id ``<x>``                     |
++------------------------+----------------------------------------------+
+
+Example: ::
+
+  show job --all or show job
+
+Show Submission Function
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show persisted job submission objects.
+
++------------------------+---------------------------------------------+
+| Argument               | Description                                 |
++========================+=============================================+
+| ``-j``, ``--jid <x>``  | Show available submissions for given job    |
++------------------------+---------------------------------------------+
+| ``-d``, ``--detail``   | Show job submissions in full detail         |
++------------------------+---------------------------------------------+
+
+Example: ::
+
+  show submission
+  show submission --jid 1
+  show submission --jid 1 --detail
+
+Create Command
+--------------
+
+Creates new link and job objects. This command is supported only in interactive mode. It will ask the user to enter the link config when creating a link object, and the from/to and driver job configs when creating a job object.
+
+Available functions:
+
++----------------+-------------------------------------------------+
+| Function       | Description                                     |
++================+=================================================+
+| ``link``       | Create new link object                          |
++----------------+-------------------------------------------------+
+| ``job``        | Create new job object                           |
++----------------+-------------------------------------------------+
+
+Create Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Create new link object.
+
++------------------------+-------------------------------------------------------------+
+| Argument               | Description                                                 |
++========================+=============================================================+
+| ``-c``, ``--cid <x>``  | Create new link object for connector with id ``<x>``        |
++------------------------+-------------------------------------------------------------+
+
+
+Example: ::
+
+  create link --cid 1 or create link -c 1
+
+Create Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Create new job object.
+
++-------------------------+------------------------------------------------------------------+
+| Argument                | Description                                                      |
++=========================+==================================================================+
+| ``-f``, ``--from <x>``  | Create new job object with a FROM link with id ``<x>``           |
++-------------------------+------------------------------------------------------------------+
+| ``-t``, ``--to <t>``    | Create new job object with a TO link with id ``<t>``             |
++-------------------------+------------------------------------------------------------------+
+
+Example: ::
+
+  create job --from 1 --to 2 or create job -f 1 -t 2
+
+Update Command
+--------------
+
+The update command allows you to edit link and job objects. This command is supported only in interactive mode.
+
+Update Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Update existing link object.
+
++------------------------+---------------------------------------------+
+| Argument               | Description                                 |
++========================+=============================================+
+| ``-x``, ``--lid <x>``  | Update existing link with id ``<x>``        |
++------------------------+---------------------------------------------+
+
+Example: ::
+
+  update link --lid 1
+
+Update Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Update existing job object.
+
++------------------------+--------------------------------------------+
+| Argument               | Description                                |
++========================+============================================+
+| ``-j``, ``--jid <x>``  | Update existing job object with id ``<x>`` |
++------------------------+--------------------------------------------+
+
+Example: ::
+
+  update job --jid 1
+
+
+Delete Command
+--------------
+
+Deletes link and job objects from the Sqoop server.
+
+Delete Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Delete existing link object.
+
++------------------------+-------------------------------------------+
+| Argument               | Description                               |
++========================+===========================================+
+| ``-x``, ``--lid <x>``  | Delete link object with id ``<x>``        |
++------------------------+-------------------------------------------+
+
+Example: ::
+
+  delete link --lid 1
+
+
+Delete Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Delete existing job object.
+
++------------------------+------------------------------------------+
+| Argument               | Description                              |
++========================+==========================================+
+| ``-j``, ``--jid <x>``  | Delete job object with id ``<x>``        |
++------------------------+------------------------------------------+
+
+Example: ::
+
+  delete job --jid 1
+
+
+Clone Command
+-------------
+
+The clone command will load an existing link or job object from the Sqoop server and allow the user to update it in place, which will result in the creation of a new link or job object. This command is not supported in batch mode.
+
+Clone Link Function
+~~~~~~~~~~~~~~~~~~~
+
+Clone existing link object.
+
++------------------------+------------------------------------------+
+| Argument               | Description                              |
++========================+==========================================+
+| ``-x``, ``--lid <x>``  | Clone link object with id ``<x>``        |
++------------------------+------------------------------------------+
+
+Example: ::
+
+  clone link --lid 1
+
+
+Clone Job Function
+~~~~~~~~~~~~~~~~~~
+
+Clone existing job object.
+
++------------------------+------------------------------------------+
+| Argument               | Description                              |
++========================+==========================================+
+| ``-j``, ``--jid <x>``  | Clone job object with id ``<x>``         |
++------------------------+------------------------------------------+
+
+Example: ::
+
+  clone job --jid 1
+
+Start Command
+-------------
+
+The start command will begin execution of an existing Sqoop job.
+
+Start Job Function
+~~~~~~~~~~~~~~~~~~
+
+Start job (submit new submission). Starting an already running job is considered an invalid operation.
+
++----------------------------+--------------------------------+
+| Argument                   | Description                    |
++============================+================================+
+| ``-j``, ``--jid <x>``      | Start job with id ``<x>``      |
++----------------------------+--------------------------------+
+| ``-s``, ``--synchronous``  | Synchronous job execution      |
++----------------------------+--------------------------------+
+
+Example: ::
+
+  start job --jid 1
+  start job --jid 1 --synchronous
+
+Stop Command
+------------
+
+The stop command will interrupt a job execution.
+
+Stop Job Function
+~~~~~~~~~~~~~~~~~
+
+Interrupt a running job.
+
++------------------------+----------------------------------------------+
+| Argument               | Description                                  |
++========================+==============================================+
+| ``-j``, ``--jid <x>``  | Interrupt running job with id ``<x>``        |
++------------------------+----------------------------------------------+
+
+Example: ::
+
+  stop job --jid 1
+
+Status Command
+--------------
+
+The status command will retrieve the last status of a job.
+
+Status Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Retrieve the last status for a given job.
+
++------------------------+----------------------------------------------+
+| Argument               | Description                                  |
++========================+==============================================+
+| ``-j``, ``--jid <x>``  | Retrieve status for job with id ``<x>``      |
++------------------------+----------------------------------------------+
+
+Example: ::
+
+  status job --jid 1
\ No newline at end of file
Index: content/resources/docs/1.99.4/_sources/ConnectorDevelopment.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/ConnectorDevelopment.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/ConnectorDevelopment.txt	(working copy)
@@ -0,0 +1,456 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=============================
+Sqoop 2 Connector Development
+=============================
+
+This document describes how to implement a connector in Sqoop 2, using the code sample from one of the built-in connectors ( ``GenericJdbcConnector`` ) as a reference. Sqoop 2 jobs support extraction from and/or loading to different data sources. Sqoop 2 connectors encapsulate the job lifecycle operations for extracting and/or loading data from and/or to
+different data sources. Each connector will primarily focus on a particular data source and its custom implementation for optimally reading and/or writing data in a distributed environment.
+
+.. contents::
+
+What is a Sqoop Connector?
+++++++++++++++++++++++++++
+
+Connectors provide the facility to interact with many data sources and thus can be used as a means to transfer data between them in Sqoop. The connector implementation will provide logic to read from and/or write to a data source that it represents. For instance the ( ``GenericJdbcConnector`` ) encapsulates the logic to read from and/or write to jdbc enabled relational data sources. The connector part that enables reading from a data source and transferring this data to the internal Sqoop format is called the FROM, and the part that enables writing data to a data source by transferring data from the Sqoop format is called TO. In order to interact with these data sources, the connector will provide one or many config classes and input fields within it.
+
+Broadly we support two main config types for connectors: the link type represented by the enum ``ConfigType.LINK`` and the job type represented by the enum ``ConfigType.JOB``. Link config represents the properties needed to physically connect to the data source. Job config represents the properties that are required to invoke reading from and/or writing to a particular dataset in the data source it connects to. If a connector supports both reading from and writing to, it will provide the ``FromJobConfig`` and ``ToJobConfig`` objects. Each of these config objects is custom to each connector and can have one or more inputs associated with each of the Link, FromJob and ToJob config types. Hence we call the connectors configurables, i.e. entities that can provide configs for interacting with the data source they represent. As the connectors evolve over time to support new features in their data sources, the configs and inputs will change as well. Thus the connector API also provides methods for upgrading the config and input names and data related to these data sources across different versions.
+
+The connectors implement logic for the various stages of the extract/load process using the connector API described below. While extracting/reading data from the data source the main stages are ``Initializer``, ``Partitioner``, ``Extractor`` and ``Destroyer``. While loading/writing data to the data source the main stages currently supported are ``Initializer``, ``Loader`` and ``Destroyer``. Each stage has its unique set of responsibilities that are explained in detail below. Since connectors understand the internals of the data source they represent, they work in tandem with the sqoop supported execution engines such as MapReduce or Spark (in the future) to accomplish this process in the most optimal way.
+
+When do we add a new connector?
+===============================
+You add a new connector when you need to extract/read data from a new data source, or load/write
+data into a new data source that is not supported yet in Sqoop 2.
+In addition to the connector API, Sqoop 2 also has a submission and execution engine interface.
+At the moment the only supported engine is MapReduce, but we may support additional engines in the future such as Spark. Since many parallel execution engines are capable of reading/writing data, there may be a question of whether adding support for a new data source should be done through the connector or the execution engine API.
+
+**Our guidelines are as follows:** Connectors should manage all data extract (reading) from and/or load (writing) into a data source. The submission and execution engine together manage the job submission and execution life cycle to read/write data from/to data sources in the most optimal way possible. If you need to support a new data store and the details of linking to it, and don't care how the process of reading/writing from/to it happens, then you are looking to add a connector, and you should continue reading the Connector API details below to contribute new connectors to Sqoop 2.
+
+
+Connector Implementation
+++++++++++++++++++++++++
+
+The ``SqoopConnector`` class defines an API for the connectors that must be implemented by the connector developers. Each Connector must extend ``SqoopConnector`` and override the methods shown below.
+::
+
+  public abstract String getVersion();
+  public abstract ResourceBundle getBundle(Locale locale);
+  public abstract Class getLinkConfigurationClass();
+  public abstract Class getJobConfigurationClass(Direction direction);
+  public abstract From getFrom();
+  public abstract To getTo();
+  public abstract ConnectorConfigurableUpgrader getConfigurableUpgrader();
+
+Connectors can optionally override the following methods:
+::
+
+  public List<Direction> getSupportedDirections();
+  public Class<? extends IntermediateDataFormat<?>> getIntermediateDataFormat();
+
+
+The ``getFrom`` method returns a From_ instance
+which is a ``Transferable`` entity that encapsulates the operations
+needed to read from the data source that the connector represents.
+
+The ``getTo`` method returns a To_ instance
+which is a ``Transferable`` entity that encapsulates the operations
+needed to write to the data source that the connector represents.
+
+Methods such as ``getBundle`` , ``getLinkConfigurationClass`` , ``getJobConfigurationClass``
+are related to `Configurations`_
+
+Since a connector represents a data source and it can support one of the two directions, either reading FROM its data source or writing to its data source or both, the ``getSupportedDirections`` method returns a list of directions that a connector will implement. This should be a subset of the values in the ``Direction`` enum we provide:
+::
+
+  public List<Direction> getSupportedDirections() {
+    return Arrays.asList(new Direction[]{
+      Direction.FROM,
+      Direction.TO
+    });
+  }
+
+
+From
+====
+
+The ``getFrom`` method returns a From_ instance which is a ``Transferable`` entity that encapsulates the operations needed to read from the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``From`` like this.
+::
+
+  private static final From FROM = new From(
+    GenericJdbcFromInitializer.class,
+    GenericJdbcPartitioner.class,
+    GenericJdbcExtractor.class,
+    GenericJdbcFromDestroyer.class);
+  ...
+
+  @Override
+  public From getFrom() {
+    return FROM;
+  }
+
+Initializer and Destroyer
+-------------------------
+.. _Initializer:
+.. _Destroyer:
+
+The ``Initializer`` is instantiated before the submission of the sqoop job to the execution engine and does preparations such as connecting to the data source, creating temporary tables or adding dependent jar files. Initializers are executed as the first step in the sqoop job lifecycle. Here is the ``Initializer`` API.
+::
+
+  public abstract void initialize(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+  public List<String> getJars(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+  public abstract Schema getSchema(InitializerContext context, LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration);
+
+In addition to the initialize() method where the job execution preparation activities occur, the ``Initializer`` must also implement the getSchema() method for the direction it supports. The getSchema() method is used by the sqoop system to match the data extracted/read by the ``From`` instance of the connector data source with the data loaded/written to the ``To`` instance of the connector data source. In the case of a relational database or columnar database, the returned Schema object will include the collection of columns with their data types. If the data source is schema-less, such as a file, an empty Schema can be returned (i.e. a Schema object without any columns).
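+
+As an illustration, a hypothetical ``From``-side initializer for a source with a fixed two-column layout might look roughly like the sketch below. The column classes come from ``org.apache.sqoop.schema.type``; exact constructor signatures vary between releases, so treat this as a sketch rather than a drop-in implementation.
+::
+
+  public class ExampleFromInitializer extends Initializer<LinkConfiguration, FromJobConfiguration> {
+
+    @Override
+    public void initialize(InitializerContext context, LinkConfiguration linkConfig,
+        FromJobConfiguration jobConfig) {
+      // connect to the data source, create temporary tables, register jars, ...
+    }
+
+    @Override
+    public Schema getSchema(InitializerContext context, LinkConfiguration linkConfig,
+        FromJobConfiguration jobConfig) {
+      // describe the data the extractor will produce, by column name and type
+      Schema schema = new Schema("example");
+      schema.addColumn(new FixedPoint("id"));
+      schema.addColumn(new Text("name"));
+      return schema;
+    }
+  }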
+
+NOTE: Sqoop 2 currently does not support extract and load between two connectors that represent schema-less data sources. We expect that at least the ``From`` instance of the connector or the ``To`` instance of the connector in the sqoop job will have a schema. If both ``From`` and ``To`` have an associated non-empty schema, Sqoop 2 will load data by column name, i.e. data in column "A" in the ``From`` instance of the connector for the job will be loaded to column "A" in the ``To`` instance of the connector for that job.
+
+
+The ``Destroyer`` is instantiated after the execution engine finishes its processing. It is the last step in the sqoop job lifecycle, so pending clean up tasks such as dropping temporary tables and closing connections happen here. The term destroyer is a little misleading; in the case of the ``To`` instance of the connector, it also represents the phase where the final output is committed to the data source.
+
+Partitioner
+-----------
+
+The ``Partitioner`` creates ``Partition`` instances ranging from 1..N. The N is driven by a configuration as well. The default number of partitions created is set to 10 in the sqoop code. Here is the ``Partitioner`` API.
+
+``Partitioner`` must implement the ``getPartitions`` method in the ``Partitioner`` API.
+
+::
+
+  public abstract List<Partition> getPartitions(PartitionerContext context,
+    LinkConfiguration linkConfiguration, FromJobConfiguration jobConfiguration);
+
+``Partition`` instances are passed to the Extractor_ as the argument of the ``extract`` method.
+The Extractor_ determines which portion of the data to extract for a given partition.
+
+There is no actual convention for Partition classes other than being actually ``Writable`` and ``toString()`` -able. Here is the ``Partition`` API.
+::
+
+  public abstract class Partition {
+    public abstract void readFields(DataInput in) throws IOException;
+    public abstract void write(DataOutput out) throws IOException;
+    public abstract String toString();
+  }
+
+Connectors can implement custom ``Partition`` classes. ``GenericJdbcPartitioner`` is one such example. It returns ``GenericJdbcPartition`` objects.
+
+Extractor
+---------
+
+The Extractor (E for ETL) extracts data from a given data source.
+``Extractor`` must implement the ``extract`` method in the ``Extractor`` API.
+::
+
+  public abstract void extract(ExtractorContext context,
+    LinkConfiguration linkConfiguration,
+    JobConfiguration jobConfiguration,
+    SqoopPartition partition);
+
+The ``extract`` method extracts data from the data source using the link and job configuration properties and writes it to the ``DataWriter`` (provided by the extractor context) as the default `Intermediate representation`_ .
+
+Extractors use the Writer provided by the ExtractorContext to send a record through the sqoop system.
+::
+
+  context.getDataWriter().writeArrayRecord(array);
+
+The extractor must iterate through the given partition in the ``extract`` method.
+::
+
+  while (resultSet.next()) {
+    ...
+    context.getDataWriter().writeArrayRecord(array);
+    ...
+  }
+
+
+To
+==
+
+The ``getTo`` method returns a ``To`` instance which is a ``Transferable`` entity that encapsulates the operations needed to write data to the data source the connector represents. The built-in ``GenericJdbcConnector`` defines ``To`` like this.
+::
+
+  private static final To TO = new To(
+    GenericJdbcToInitializer.class,
+    GenericJdbcLoader.class,
+    GenericJdbcToDestroyer.class);
+  ...
+
+  @Override
+  public To getTo() {
+    return TO;
+  }
+
+
+Initializer and Destroyer
+-------------------------
+
+The Initializer_ and Destroyer_ of a ``To`` instance are used in a similar way to those of a ``From`` instance.
+Refer to the previous section for more details.
+
+
+Loader
+------
+
+A loader (L for ETL) receives data from the ``From`` instance of the sqoop connector associated with the sqoop job and then loads it to the ``To`` instance of the connector associated with the same sqoop job.
+
+``Loader`` must implement the ``load`` method of the ``Loader`` API
+::
+
+  public abstract void load(LoaderContext context,
+    ConnectionConfiguration connectionConfiguration,
+    JobConfiguration jobConfiguration) throws Exception;
+
+The ``load`` method reads data from the ``DataReader`` (provided by the context) in the default `Intermediate representation`_ and loads it into the data source.
+
+The Loader must iterate in the ``load`` method until the data from ``DataReader`` is exhausted.
+::
+
+  while ((array = context.getDataReader().readArrayRecord()) != null) {
+    ...
+  }
+
+NOTE: we do not yet support a stage for connector developers to control how to balance the loading/writing of data across the multiple loaders. In the future we may add this to the connector API to allow custom logic to balance the loading across multiple reducers.
+
+Configurables
++++++++++++++
+
+Configurable registration
+=========================
+Connectors are one of the currently supported configurables in Sqoop. Sqoop 2 registers definitions of connectors from the file named ``sqoopconnector.properties``, which each connector implementation should provide to become available in Sqoop.
+::
+
+  # Generic JDBC Connector Properties
+  org.apache.sqoop.connector.class = org.apache.sqoop.connector.jdbc.GenericJdbcConnector
+  org.apache.sqoop.connector.name = generic-jdbc-connector
+
+
+Configurations
+==============
+
+Implementations of ``SqoopConnector`` override methods such as ``getLinkConfigurationClass`` and ``getJobConfigurationClass``, returning the corresponding configuration class.
+::
+
+  @Override
+  public Class getLinkConfigurationClass() {
+    return LinkConfiguration.class;
+  }
+
+  @Override
+  public Class getJobConfigurationClass(Direction direction) {
+    switch (direction) {
+      case FROM:
+        return FromJobConfiguration.class;
+      case TO:
+        return ToJobConfiguration.class;
+      default:
+        return null;
+    }
+  }
+
+Configurations are represented by annotations defined in the ``org.apache.sqoop.model`` package.
+Annotations such as ``ConfigurationClass`` , ``ConfigClass`` , ``Config`` and ``Input``
+are provided for defining configuration objects for each connector.
+
+``@ConfigurationClass`` is a marker annotation for ``ConfigurationClasses`` that hold a group or list of ``ConfigClasses`` annotated with the marker ``@ConfigClass``
+::
+
+  @ConfigurationClass
+  public class LinkConfiguration {
+
+    @Config public LinkConfig linkConfig;
+
+    public LinkConfiguration() {
+      linkConfig = new LinkConfig();
+    }
+  }
+
+Each ``ConfigClass`` defines the different inputs it exposes for the link and job configs. These inputs are annotated with ``@Input`` and the user will be asked to fill them in when they create a sqoop job and choose to use this instance of the connector for either the ``From`` or ``To`` part of the job.
+
+::
+
+  @ConfigClass(validators = {@Validator(LinkConfig.ConfigValidator.class)})
+  public class LinkConfig {
+    @Input(size = 128, validators = {@Validator(NotEmpty.class), @Validator(ClassAvailable.class)} )
+    public String jdbcDriver;
+    @Input(size = 128) public String connectionString;
+    @Input(size = 40) public String username;
+    @Input(size = 40, sensitive = true) public String password;
+    @Input public Map<String, String> jdbcProperties;
+  }
+
+Each ``ConfigClass`` and the inputs within the configs annotated with ``Input`` can specify validators via the ``@Validator`` annotation described below.
+
+Empty Configuration
+-------------------
+If a connector does not have any configuration inputs to specify for the ``ConfigType.LINK`` or ``ConfigType.JOB``, it is recommended to return the ``EmptyConfiguration`` class in the ``getLinkConfigurationClass()`` or ``getJobConfigurationClass(..)`` methods.
+::
+
+  @ConfigurationClass
+  public class EmptyConfiguration { }
+
+
+Configuration ResourceBundle
+============================
+
+The config and its corresponding input names and the input field descriptions are represented in the config resource bundle defined per connector.
+::
+
+  # jdbc driver
+  connection.jdbcDriver.label = JDBC Driver Class
+  connection.jdbcDriver.help = Enter the fully qualified class name of the JDBC \
+    driver that will be used for establishing this connection.
+
+  # connect string
+  connection.connectionString.label = JDBC Connection String
+  connection.connectionString.help = Enter the value of JDBC connection string to be \
+    used by this connector for creating connections.
+
+  ...
+
+Those resources are loaded by the ``getBundle`` method of the ``SqoopConnector``.
+::
+
+  @Override
+  public ResourceBundle getBundle(Locale locale) {
+    return ResourceBundle.getBundle(
+      GenericJdbcConnectorConstants.RESOURCE_BUNDLE_NAME, locale);
+  }
+
+
+Validations for Configs and Inputs
+==================================
+
+Validators validate the config objects and the inputs associated with the config objects. For the config objects themselves we encourage developers to write custom validators for both the link and job config types.
+
+::
+
+  @Input(size = 128, validators = {@Validator(value = StartsWith.class, strArg = "jdbc:")} )
+
+  @Input(size = 255, validators = { @Validator(NotEmpty.class) })
+
+Sqoop 2 provides a list of standard input validators that can be used by different connectors for the link and job type configuration inputs.
+
+::
+
+  public class NotEmpty extends AbstractValidator<String> {
+    @Override
+    public void validate(String instance) {
+      if (instance == null || instance.isEmpty()) {
+        addMessage(Status.ERROR, "Can't be null nor empty");
+      }
+    }
+  }
+
+The validation logic is executed when users create sqoop jobs and input values for the link and job configs associated with the ``From`` and ``To`` instances of the connectors associated with the job.
+
+
+Sqoop 2 MapReduce Job Execution Lifecycle with Connector API
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Sqoop 2 provides MapReduce utilities such as ``SqoopMapper`` and ``SqoopReducer`` that aid sqoop job execution.
+
+Note: Any class prefixed with Sqoop is an internal sqoop class provided for MapReduce and is not part of the connector API. These internal classes work with the custom implementations of ``Extractor``, ``Partitioner`` in the ``From`` instance and ``Loader`` in the ``To`` instance of the connector.
+
+When reading from a data source, the ``Extractor`` provided by the ``From`` instance of the connector extracts data from a corresponding data source it represents and the ``Loader``, provided by the ``To`` instance of the connector, loads data into the data source it represents.
+
+The diagram below describes the initialization phase of a job.
+``SqoopInputFormat`` creates splits using ``Partitioner``.
+::
+
+      ,----------------.          ,-----------.
+      |SqoopInputFormat|          |Partitioner|
+      `-------+--------'          `-----+-----'
+   getSplits  |                         |
+  ----------->|                         |
+              |      getPartitions      |
+              |------------------------>|
+              |                         |         ,---------.
+              |                         |-------> |Partition|
+              |                         |         `----+----'
+              |<- - - - - - - - - - - - |              |
+              |                         |              |    ,----------.
+              |-------------------------------------------->|SqoopSplit|
+              |                         |              |    `----+-----'
+
+The diagram below describes the map phase of a job.
+``SqoopMapper`` invokes the ``From`` connector's extractor's ``extract`` method.
+::
+
+      ,-----------.
+      |SqoopMapper|
+      `-----+-----'
+   run      |
+  --------->|                                   ,------------------.
+            |---------------------------------->|SqoopMapDataWriter|
+            |                                   `------+-----------'
+            |                ,---------.               |
+            |--------------> |Extractor|               |
+            |                `----+----'               |
+            |       extract       |                    |
+            |-------------------->|                    |
+            |                     |                    |
+     read from DB                 |                    |
+  <-------------------------------|       write*       |
+            |                     |------------------->|
+            |                     |                    |       ,----.
+            |                     |                    |------>|Data|
+            |                     |                    |       `-+--'
+            |                     |                    |
+            |                     |                    | context.write
+            |                     |                    |-------------------------->
+
+The diagram below describes the reduce phase of a job.
+``OutputFormat`` invokes the ``To`` connector's loader's ``load`` method (via ``SqoopOutputFormatLoadExecutor`` ).
+::
+
+    ,------------.          ,---------------------.
+    |SqoopReducer|          |SqoopNullOutputFormat|
+    `---+--------'          `----------+----------'
+        |                              |    ,-----------------------------.
+        |                              |--> |SqoopOutputFormatLoadExecutor|
+        |                              |    `--------------+--------------'        ,----.
+        |                              |                   |---------------------> |Data|
+        |                              |                   |                       `-+--'
+        |                              |                   |   ,-----------------.   |
+        |                              |                   |-> |SqoopRecordWriter|   |
+  getRecordWriter                      |                   |   `--------+--------'   |
+  ----------------------->|  getRecordWriter               |            |            |
+        |                 |----------------->|             |            |            |   ,--------------.
+        |                 |                  |-----------------------------> |ConsumerThread|
+        |                 |                  |             |            |            |   `------+-------'
+        |                 |<- - - - - - - - -|             |            |            |          |   ,------.
+  <- - - - - - - - - - - -|                  |             |            |            |          |-->|Loader|
+        |                 |                  |             |            |            |          |   `--+---'
+        |                 |                  |             |            |            |          |      |
+        |                 |                  |             |            |            |          | load |
+   run  |                 |                  |             |            |            |          |----->|
+  ----->|                 |      write       |             |            |            |          |      |
+        |------------------------------------------------>|  setContent |           |  read*   |      |
+        |                 |                  |             |----------->| getContent |<---------|      |
+        |                 |                  |             |            |<-----------|          |      |
+        |                 |                  |             |            | - - - - - >|          |      |
+        |                 |                  |             |            |            |   write into DB |
+        |                 |                  |             |            |            |---------------->|
+
+
+
+.. _`Intermediate representation`: https://cwiki.apache.org/confluence/display/SQOOP/Sqoop2+Intermediate+representation
Index: content/resources/docs/1.99.4/_sources/DevEnv.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/DevEnv.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/DevEnv.txt	(working copy)
@@ -0,0 +1,57 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====================================
+Sqoop 2 Development Environment Setup
+=====================================
+
+This document describes how to set up a development environment for Sqoop 2.
+
+System Requirement
+==================
+
+Java
+----
+
+Sqoop is written in Java and uses version 1.6. You can `download java `_ and install it. Point JAVA_HOME to the installed directory, e.g. export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_32.
+
+Maven
+-----
+
+Sqoop uses Maven 3 for building the project. Download `Maven `_ and follow the installation instructions given in the `link `_.
+
+Eclipse Setup
+=============
+
+Steps for downloading the source code are given in `Building Sqoop2 `_.
+
+The Sqoop 2 project has multiple modules where one module depends on another module, e.g. the sqoop 2 client module depends on the sqoop 2 common module. Follow the steps below for creating eclipse projects and classpaths for each module.
+
+::
+
+  //Install all packages into the local maven repository
+  mvn clean install -DskipTests
+
+  //Add the M2_REPO variable to the eclipse workspace
+  mvn eclipse:configure-workspace -Declipse.workspace=
+
+  //Eclipse project creation with optional parameters
+  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
+
+Alternatively, to add the M2_REPO classpath variable manually as the maven repository path in eclipse: window -> Java -> Classpath Variables -> click "New" -> in the new dialog box, enter Name as M2_REPO and Path as $HOME/.m2/repository -> click Ok.
+
+On successful execution of the above maven commands, import the sqoop project modules into eclipse: File -> Import -> General -> Existing Projects into Workspace -> click Next -> browse to the Sqoop 2 directory ($HOME/git/sqoop2) -> click Ok -> the import dialog shows multiple projects (sqoop-client, sqoop-common, etc.) -> select all modules -> click Finish.
+
Index: content/resources/docs/1.99.4/_sources/Installation.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/Installation.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/Installation.txt	(working copy)
@@ -0,0 +1,103 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+============
+Installation
+============
+
+Sqoop ships as one binary package, however it is composed of two separate parts - client and server. You need to install the server on a single node in your cluster.
+This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a mapreduce client and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any arbitrary number of machines. The client does not act as a mapreduce client and thus you do not need to install Hadoop on nodes that will act only as a Sqoop client.
+
+Server installation
+===================
+
+Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You don't need to run any Hadoop related services there, however the machine must be able to act as a Hadoop client. You should be able to list the contents of HDFS, for example: ::
+
+  hadoop dfs -ls
+
+The Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you're using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder. ::
+
+  # Decompress Sqoop distribution tarball
+  tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
+
+  # Move decompressed content to any location
+  mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
+
+  # Change working directory
+  cd /usr/lib/sqoop
+
+
+Installing Dependencies
+-----------------------
+
+Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - ``NameNode`` and either ``JobTracker`` or ``ResourceManager`` depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.
+
+The path to the Hadoop libraries is stored in the file ``catalina.properties`` inside the directory ``server/conf``. You need to change the property called ``common.loader`` to contain all directories with your Hadoop libraries. The default expected locations are ``/usr/lib/hadoop`` and ``/usr/lib/hadoop/lib/``. Please check out the comments in the file for a further description of how to configure different locations.
+
+Lastly you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the ``lib/`` directory. You can create this directory if it does not exist already.
+
+Configuring PATH
+----------------
+
+All user and administrator facing shell commands are stored in the ``bin/`` directory. It's recommended to add this directory to your ``$PATH`` for easier execution, for example::
+
+  PATH=$PATH:`pwd`/bin/
+
+Further documentation pages will assume that you have the binaries on your ``$PATH``. You will need to call them with their full path if you decide to skip this step.
+
+Configuring Server
+------------------
+
+Before starting the server you should revise the configuration to match your specific environment. Server configuration files are stored in the ``server/conf`` directory of the distributed artifact, alongside the other Tomcat configuration files.
+
+The file ``sqoop_bootstrap.properties`` specifies which configuration provider should be used for loading the configuration for the rest of the Sqoop server. The default value ``PropertiesConfigurationProvider`` should be sufficient.
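+
+For reference, the shipped ``sqoop_bootstrap.properties`` consists of a single property along these lines (shown as an illustration; check the file in your distribution):
+::
+
+  # Configuration provider used to load the rest of the server configuration
+  sqoop.config.provider=org.apache.sqoop.core.PropertiesConfigurationProvider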
+
+
+The second configuration file, ``sqoop.properties``, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check whether all configuration properties fit your environment. The defaults, with little or no tweaking, should be sufficient for the most common cases.
+
+You can verify the Sqoop server configuration using the `Verify Tool `__, for example::
+
+  sqoop2-tool verify
+
+Upon running the ``verify`` tool, you should see messages similar to the following::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+Consult the `Verify Tool `__ documentation page in case of any failure.
+
+Server Life Cycle
+-----------------
+
+After installation and configuration you can start the Sqoop server with the following command: ::
+
+  sqoop2-server start
+
+Similarly you can stop the server using the following command: ::
+
+  sqoop2-server stop
+
+By default the Sqoop server daemons use ports 12000 and 12001. You can set ``SQOOP_HTTP_PORT`` and ``SQOOP_ADMIN_PORT`` in the configuration file ``server/bin/setenv.sh`` to use different ports.
+
+Client installation
+===================
+
+The client does not need extra installation and configuration steps. Just copy the Sqoop distribution artifact onto the target machine and unzip it in the desired location. You can start the client with the following command: ::
+
+  sqoop2-shell
+
+You can find more documentation for the Sqoop client in the `Command Line Client `_ section.
+
+
Index: content/resources/docs/1.99.4/_sources/RESTAPI.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/RESTAPI.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/RESTAPI.txt	(working copy)
@@ -0,0 +1,1441 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+=========================
+Sqoop REST API Guide
+=========================
+
+This document explains how you can use the Sqoop REST API to build applications that interact with the Sqoop server.
+The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language using HTTP over JSON.
+
+.. contents:: Table of Contents
+
+Initialization
+=========================
+
+Before continuing further, make sure that the Sqoop server is running.
+
+Then find out the details of the Sqoop server: ``host``, ``port`` and ``webapp``, and keep them in mind. Note that the sqoop server is running on Apache Tomcat. To exercise a REST API for Sqoop, you assemble and send an HTTP request to a URL corresponding to that API. Generally, the URL contains the ``host`` on which the sqoop server is running, the ``port`` at which the sqoop server is listening, and ``webapp``, the context path at which the Sqoop server is registered in the Apache Tomcat engine.
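+
+As an illustration, assuming the default values of ``localhost`` for the host, ``12000`` for the port and ``sqoop`` for the webapp, a request can be assembled and sent with any generic HTTP client, for example ``curl``::
+
+  # URL layout: http://<host>:<port>/<webapp>/<api-path>
+  curl http://localhost:12000/sqoop/version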
+
+Certain requests might need to contain some additional query parameters and post data. These parameters could be given via
+the HTTP headers, request body or both. All the content in the HTTP body is in ``JSON`` format.
+
+Understand Connector, Driver, Link and Job
+===========================================================
+
+To create and run a Sqoop Job, we need to provide config values for connecting to a data source and then processing the data in that data source. Processing might be either reading from or writing to the data source. Thus we have configurable entities such as the ``From`` and ``To`` parts of the connectors and the driver, each of which exposes configs with one or more inputs within them.
+
+For instance a connector that represents a relational data source such as MySQL will expose config classes for connecting to the database. Some of the relevant inputs are the connection string, driver class, the username and the password to connect to the database. These configs remain the same to read data from any of the tables within that database. Hence they are grouped under ``LinkConfiguration``.
+
+Each connector can support reading from and/or writing to the data source it represents. Reading from and writing to a data source are represented by ``From`` and ``To`` respectively. Specific configurations are required to perform the job of reading from or writing to the data source. These are grouped in the ``FromJobConfiguration`` and ``ToJobConfiguration`` objects of the connector.
+
+For instance, a connector that represents a relational data source such as MySQL will expose the table name to read from or the SQL query to use while reading data as a FromJobConfiguration. Similarly a connector that represents a data source such as HDFS will expose the output directory to write to as a ToJobConfiguration.
+
+
+Objects
+==============
+
+This section covers all the objects that might exist in an API request and/or API response.
+
+Configs and Inputs
+------------------
+
+Before creating any link for a connector or a job with associated ``From`` and ``To`` links, the first thing to do is to get familiar with all the configurations that the connector exposes.
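+
+For example, one way to get familiar with those configurations is to fetch the connector metadata over the REST API described below - a sketch assuming the default server location::
+
+  # Lists every registered connector together with its configs and inputs
+  curl -H "sqoop-user-name: sqoop" http://localhost:12000/sqoop/v1/connectors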
+
+Each config consists of the following information:
+
++------------------+---------------------------------------------------------+
+| Field            | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this config                                   |
++------------------+---------------------------------------------------------+
+| ``inputs``       | An array of inputs of this config                       |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this config per connector            |
++------------------+---------------------------------------------------------+
+| ``type``         | The type of this config (LINK/ JOB)                     |
++------------------+---------------------------------------------------------+
+
+A typical config object is shown below:
+
+::
+
+  {
+    id:7,
+    inputs:[
+      {
+        id: 25,
+        name: "throttlingConfig.numExtractors",
+        type: "INTEGER",
+        sensitive: false
+      },
+      {
+        id: 26,
+        name: "throttlingConfig.numLoaders",
+        type: "INTEGER",
+        sensitive: false
+      }
+    ],
+    name: "throttlingConfig",
+    type: "JOB"
+  }
+
+Each input object in a config is structured as follows:
+
++------------------+---------------------------------------------------------+
+| Field            | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this input                                    |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this input per config                |
++------------------+---------------------------------------------------------+
+| ``type``         | The data type of this input field                       |
++------------------+---------------------------------------------------------+
+| ``size``         | The length of this input field                          |
++------------------+---------------------------------------------------------+
+| ``sensitive``    | Whether this input contains sensitive information       |
++------------------+---------------------------------------------------------+
+
+
+To send a filled config in the request, you should always use the config id and input id to map the values to their corresponding names.
+For example, the following request contains an input value ``com.mysql.jdbc.Driver`` with input id ``7`` inside a config with id ``4`` that belongs to a link with id ``3``:
+
+::
+
+  link: {
+    id: 3,
+    enabled: true,
+    link-config-values: [{
+      id: 4,
+      inputs: [{
+        id: 7,
+        name: "linkConfig.jdbcDriver",
+        value: "com.mysql.jdbc.Driver",
+        type: "STRING",
+        size: 128,
+        sensitive: false
+      }, {
+        id: 8,
+        name: "linkConfig.connectionString",
+        value: "jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
+        type: "STRING",
+        size: 128,
+        sensitive: false
+      },
+      ...
+    }
+  }
+
+Exception Response
+------------------
+
+Each operation on the Sqoop server might return an exception in the HTTP response. Remember to take this into account. The exception code and message can be found in both the header and body of the response.
+
+Please jump to the "Header Parameters" section to find out how to get exception information from the header.
+
+In the body, the exception is expressed in ``JSON`` format.
An example of the exception is:
+
+::
+
+  {
+    "message":"DERBYREPO_0030:Unable to load specific job metadata from repository - Couldn't find job with id 2",
+    "stack-trace":[
+      {
+        "file":"DerbyRepositoryHandler.java",
+        "line":1111,
+        "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
+        "method":"findJob"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":451,
+        "class":"org.apache.sqoop.repository.JdbcRepository$16",
+        "method":"doIt"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":90,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":61,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":448,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"findJob"
+      },
+      {
+        "file":"JobRequestHandler.java",
+        "line":238,
+        "class":"org.apache.sqoop.handler.JobRequestHandler",
+        "method":"getJobs"
+      }
+    ],
+    "class":"org.apache.sqoop.common.SqoopException"
+  }
+
+Config and Input Validation Status Response
+--------------------------------------------
+
+The config and the inputs associated with the connectors also provide custom validation rules for the values given to these input fields. Sqoop applies these custom validators and their corresponding validation logic when config values for the LINK and JOB are posted.
+
+
+An example of an OK status with the persisted ID:
+
+::
+
+  {
+    "id": 3,
+    "validation-result": [
+      {}
+    ]
+  }
+
+An example of an ERROR status:
+
+::
+
+  {
+    "validation-result": [
+      {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+    ]
+  }
+
+Job Submission Status Response
+------------------------------
+
+After starting a job, you can look up its running status.
There are 7 possible statuses:
+
++-----------------------------+---------------------------------------------------------+
+| Status                      | Description                                             |
++=============================+=========================================================+
+| ``BOOTING``                 | In the middle of submitting the job                     |
++-----------------------------+---------------------------------------------------------+
+| ``FAILURE_ON_SUBMIT``       | Unable to submit this job to remote cluster             |
++-----------------------------+---------------------------------------------------------+
+| ``RUNNING``                 | The job is running now                                  |
++-----------------------------+---------------------------------------------------------+
+| ``SUCCEEDED``               | Job finished successfully                               |
++-----------------------------+---------------------------------------------------------+
+| ``FAILED``                  | Job failed                                              |
++-----------------------------+---------------------------------------------------------+
+| ``NEVER_EXECUTED``          | The job has never been executed since created           |
++-----------------------------+---------------------------------------------------------+
+| ``UNKNOWN``                 | The status is unknown                                   |
++-----------------------------+---------------------------------------------------------+
+
+Header Parameters
+=================
+
+For all Sqoop requests, the following header parameters are supported:
+
++---------------------------+----------+---------------------------------------------------------+
+| Parameter                 | Required | Description                                             |
++===========================+==========+=========================================================+
+| ``sqoop-user-name``       | true     | The name of the user who makes the requests             |
++---------------------------+----------+---------------------------------------------------------+
+
+For all the responses, the following parameters in the HTTP message header are available:
+
++---------------------------+----------+------------------------------------------------------------------------------+
+| Parameter                 | Required | Description                                                                  |
++===========================+==========+==============================================================================+
+| ``sqoop-error-code``      | false    | The error code when an error happens on the server side for this request    |
++---------------------------+----------+------------------------------------------------------------------------------+
+| ``sqoop-error-message``   | false    | The explanation for an error code                                            |
++---------------------------+----------+------------------------------------------------------------------------------+
+
+So far, these are the only 2 parameters in the header of the response message. They only exist when something bad happens on the server,
+and they always come along with an exception message in the response body.
+
+REST APIs
+==========
+
+This section elaborates all the REST APIs that are supported by the Sqoop server.
+
+/version - [GET] - Get Sqoop Version
+-------------------------------------
+
+Get all the version metadata of the Sqoop software on the server side.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------+---------------------------------------------------------+
+| Field              | Description                                             |
++====================+=========================================================+
+| ``source-revision``| The revision number of Sqoop source code                |
++--------------------+---------------------------------------------------------+
+| ``api-versions``   | The version of network protocol                         |
++--------------------+---------------------------------------------------------+
+| ``build-date``     | The Sqoop release date                                  |
++--------------------+---------------------------------------------------------+
+| ``user``           | The user who made the release                           |
++--------------------+---------------------------------------------------------+
+| ``source-url``     | The url of the source code trunk                        |
++--------------------+---------------------------------------------------------+
+| ``build-version``  | The version of Sqoop on the server side                 |
++--------------------+---------------------------------------------------------+
+
+
+* Response Example:
+
+::
+
+  {
+    source-url: "git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
+    source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
+    build-version: "2.0.0-SNAPSHOT",
+    api-versions: [
+      "v1"
+    ],
+    user: "vbasavaraj",
+    build-date: "Mon Nov 3 08:18:21 PST 2014"
+  }
+
+/v1/connectors - [GET] Get all Connectors
+-------------------------------------------
+
+Get all the connectors registered in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    connectors: [{
+      id: 1,
+      link-config: [],
+      job-config: {},
+      name: "hdfs-connector",
+      class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+      all-config-resources: {},
+      version: "2.0.0-SNAPSHOT"
+    }, {
+      id: 2,
+      link-config: [],
+      job-config: {},
+      name: "generic-jdbc-connector",
+      class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
+      all-config-resources: {},
+      version: "2.0.0-SNAPSHOT"
+    }]
+  }
+
+/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector
+---------------------------------------------------------------------
+
+Provide the id or unique name of the connector in the url ``[cid]`` or ``[cname]`` part.
+ +* Method: ``GET`` +* Format: ``JSON`` +* Request Content: ``None`` + +* Fields of Response: + ++--------------------------+----------------------------------------------------------------------------------------+ +| Field | Description | ++==========================+========================================================================================+ +| ``id`` | The id for the connector ( registered as a configurable ) | ++--------------------------+----------------------------------------------------------------------------------------+ +| ``job-config`` | Connector job config and inputs for both FROM and TO | ++--------------------------+----------------------------------------------------------------------------------------+ +| ``link-config`` | Connector link config and inputs | ++--------------------------+----------------------------------------------------------------------------------------+ +| ``all-config-resources`` | All config inputs labels and description for the given connector | ++--------------------------+----------------------------------------------------------------------------------------+ +| ``version`` | The build version required for config and input data upgrades | ++--------------------------+----------------------------------------------------------------------------------------+ + +* Response Example: + +:: + + { + connector: { + id: 1, + job-config: { + TO: [{ + id: 3, + inputs: [{ + id: 3, + values: "TEXT_FILE,SEQUENCE_FILE", + name: "toJobConfig.outputFormat", + type: "ENUM", + sensitive: false + }, { + id: 4, + values: "NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM", + name: "toJobConfig.compression", + type: "ENUM", + sensitive: false + }, { + id: 5, + name: "toJobConfig.customCompression", + type: "STRING", + size: 255, + sensitive: false + }, { + id: 6, + name: "toJobConfig.outputDirectory", + type: "STRING", + size: 255, + sensitive: false + }], + name: "toJobConfig", + type: "JOB" + }], + FROM: [{ + id: 2, + inputs: [{ + id: 2, + name: "fromJobConfig.inputDirectory", + type: "STRING", + size: 255, + sensitive: false + }], + name: "fromJobConfig", + type: "JOB" + }] + }, + link-config: [{ + id: 1, + inputs: [{ + id: 1, + name: "linkConfig.uri", + type: "STRING", + size: 255, + sensitive: false + }], + name: "linkConfig", + type: "LINK" + }], + name: "hdfs-connector", + class: "org.apache.sqoop.connector.hdfs.HdfsConnector", + all-config-resources: { + fromJobConfig.label: "From Job configuration", + toJobConfig.ignored.label: "Ignored", + fromJobConfig.help: "Specifies information required to get data from Hadoop ecosystem", + toJobConfig.ignored.help: "This value is ignored", + toJobConfig.label: "ToJob configuration", + toJobConfig.storageType.label: "Storage type", + fromJobConfig.inputDirectory.label: "Input directory", + toJobConfig.outputFormat.label: "Output format", + toJobConfig.outputDirectory.label: "Output directory", + toJobConfig.outputDirectory.help: "Output directory for final data", + toJobConfig.compression.help: "Compression that should be used for the data", + toJobConfig.outputFormat.help: "Format in which data should be serialized", + toJobConfig.customCompression.label: "Custom compression format", + toJobConfig.compression.label: "Compression format", + linkConfig.label: "Link configuration", + toJobConfig.customCompression.help: "Full class name of the custom compression", + toJobConfig.storageType.help: "Target on Hadoop ecosystem where to store data", + linkConfig.help: "Here you supply information necessary to 
connect to HDFS", + linkConfig.uri.help: "HDFS URI used to connect to HDFS", + linkConfig.uri.label: "HDFS URI", + fromJobConfig.inputDirectory.help: "Directory that should be exported", + toJobConfig.help: "You must supply the information requested in order to get information where you want to store your data." + }, + version: "2.0.0-SNAPSHOT" + } + } + + +/v1/driver - [GET]- Get Sqoop Driver +----------------------------------------------- + +Driver exposes configurations required for the job execution. + +* Method: ``GET`` +* Format: ``JSON`` +* Request Content: ``None`` + +* Fields of Response: + ++--------------------------+----------------------------------------------------------------------------------------------------+ +| Field | Description | ++==========================+====================================================================================================+ +| ``id`` | The id for the driver ( registered as a configurable ) | ++--------------------------+----------------------------------------------------------------------------------------------------+ +| ``job-config`` | Driver job config and inputs | ++--------------------------+----------------------------------------------------------------------------------------------------+ +| ``version`` | The build version of the driver | ++--------------------------+----------------------------------------------------------------------------------------------------+ +| ``all-config-resources`` | Driver exposed config and input labels and description | ++--------------------------+----------------------------------------------------------------------------------------------------+ + +* Response Example: + +:: + + { + id: 3, + job-config: [{ + id: 7, + inputs: [{ + id: 25, + name: "throttlingConfig.numExtractors", + type: "INTEGER", + sensitive: false + }, { + id: 26, + name: "throttlingConfig.numLoaders", + type: "INTEGER", + sensitive: false + }], + name: "throttlingConfig", + type: "JOB" + }], + all-config-resources: { + throttlingConfig.numExtractors.label: "Extractors", + throttlingConfig.numLoaders.help: "Number of loaders that Sqoop will use", + throttlingConfig.numLoaders.label: "Loaders", + throttlingConfig.label: "Throttling resources", + throttlingConfig.numExtractors.help: "Number of extractors that Sqoop will use", + throttlingConfig.help: "Set throttling boundaries to not overload your systems" + }, + version: "1" + } + +/v1/links/ - [GET] Get all links +------------------------------------------- + +Get all the links created in Sqoop + +* Method: ``GET`` +* Format: ``JSON`` +* Request Content: ``None`` + +* Response Example + +:: + + { + links: [ + { + id: 1, + enabled: true, + update-user: "root", + link-config-values: [], + name: "First Link", + creation-date: 1415309361756, + connector-id: 1, + update-date: 1415309361756, + creation-user: "root" + }, + { + id: 2, + enabled: true, + update-user: "root", + link-config-values: [], + name: "Second Link", + creation-date: 1415309390807, + connector-id: 2, + update-date: 1415309390807, + creation-user: "root" + } + ] + } + + +/v1/links?cname=[cname] - [GET] Get all links by Connector +------------------------------------------------------------ +Get all the links for a given connector identified by ``[cname]`` part. + + +/v1/link/[lname] or /v1/link/[lid] - [GET] - Get Link +------------------------------------------------------------------------------- + +Provide the id or unique name of the link in the url ``[lid]`` or ``[lname]`` part. 
+ +Get all the details of the link including the id, name, type and the corresponding config input values for the link + + +* Method: ``GET`` +* Format: ``JSON`` +* Request Content: ``None`` + +* Response Example: + +:: + + { + link: { + id: 1, + enabled: true, + link-config-values: [{ + id: 1, + inputs: [{ + id: 1, + name: "linkConfig.uri", + value: "hdfs%3A%2F%2Fnamenode%3A8090", + type: "STRING", + size: 255, + sensitive: false + }], + name: "linkConfig", + type: "LINK" + }], + update-user: "root", + name: "First Link", + creation-date: 1415287846371, + connector-id: 1, + update-date: 1415287846371, + creation-user: "root" + } + } + +/v1/link - [POST] - Create Link +--------------------------------------------------------- + +Create a new link object. Provide values to the link config inputs for the ones that are required. + +* Method: ``POST`` +* Format: ``JSON`` +* Fields of Request: + ++--------------------------+--------------------------------------------------------------------------------------+ +| Field | Description | ++==========================+======================================================================================+ +| ``link`` | The root of the post data in JSON | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``id`` | The id of the link can be left blank in the post data | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``enabled`` | Whether to enable this link (true/false) | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``update-date`` | The last updated time of this link | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``creation-date`` | The creation time of this link | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``update-user`` | The user who updated this link | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``creation-user`` | The user who created this link | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``name`` | The name of this link | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``link-config-values`` | Config input values for link config for the corresponding connector | ++--------------------------+--------------------------------------------------------------------------------------+ +| ``connector-id`` | The id of the connector used for this link | ++--------------------------+--------------------------------------------------------------------------------------+ + +* Request Example: + +:: + + { + link: { + id: -1, + enabled: true, + link-config-values: [{ + id: 1, + inputs: [{ + id: 1, + name: "linkConfig.uri", + value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1", + type: "STRING", + size: 255, + sensitive: false + }], + name: "testInput", + type: "LINK" + }], + update-user: "root", + name: "testLink", + creation-date: 1415202223048, + connector-id: 1, + update-date: 1415202223048, + creation-user: "root" + } + } + +* Fields of Response: + 
++---------------------------+--------------------------------------------------------------------------------------+
+| Field                     | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned for this newly created link                                         |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the link config inputs given in the post data             |
++---------------------------+--------------------------------------------------------------------------------------+
+
+* ERROR Response Example:
+
+::
+
+  {
+    "validation-result": [
+      {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+    ]
+  }
+
+
+/v1/link/[lname] or /v1/link/[lid] - [PUT] - Update Link
+---------------------------------------------------------
+
+Update an existing link object with name [lname] or id [lid]. To make the procedure of filling inputs easier, the general practice
+is to get the link first and then change some of the values for the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+      {}
+    ]
+  }
+
+/v1/link/[lname] or /v1/link/[lid] - [DELETE] - Delete Link
+-----------------------------------------------------------------
+
+Delete a link with name [lname] or id [lid].
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/enable or /v1/link/[lname]/enable - [PUT] - Enable Link
+--------------------------------------------------------------------------------
+
+Enable a link with id ``lid`` or name ``lname``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/disable or /v1/link/[lname]/disable - [PUT] - Disable Link
+---------------------------------------------------------------------------
+
+Disable a link with id ``lid`` or name ``lname``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/jobs/ - [GET] Get all jobs
+-------------------------------------------
+
+Get all the jobs created in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    jobs: [{
+      driver-config-values: [],
+      enabled: true,
+      from-connector-id: 1,
+      update-user: "root",
+      to-config-values: [],
+      to-connector-id: 2,
+      creation-date: 1415310157618,
+      update-date: 1415310157618,
+      creation-user: "root",
+      id: 1,
+      to-link-id: 2,
+      from-config-values: [],
+      name: "First Job",
+      from-link-id: 1
+    },{
+      driver-config-values: [],
+      enabled: true,
+      from-connector-id: 2,
+      update-user: "root",
+      to-config-values: [],
+      to-connector-id: 1,
+      creation-date: 1415310650600,
+      update-date: 1415310650600,
+      creation-user: "root",
+      id: 2,
+      to-link-id: 1,
+      from-config-values: [],
+      name: "Second Job",
+      from-link-id: 2
+    }]
+  }
+
+/v1/jobs?cname=[cname] - [GET] Get all jobs by connector
+------------------------------------------------------------
+Get all the jobs for a given connector identified by the ``[cname]`` part.
+
+
+/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job
+-----------------------------------------------------
+
+Provide the name or the id of the job in the url [jname]
+part or [jid] part.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    job: {
+      driver-config-values: [{
+        id: 7,
+        inputs: [{
+          id: 25,
+          name: "throttlingConfig.numExtractors",
+          value: "3",
+          type: "INTEGER",
+          sensitive: false
+        }, {
+          id: 26,
+          name: "throttlingConfig.numLoaders",
+          value: "3",
+          type: "INTEGER",
+          sensitive: false
+        }],
+        name: "throttlingConfig",
+        type: "JOB"
+      }],
+      enabled: true,
+      from-connector-id: 1,
+      update-user: "root",
+      to-config-values: [{
+        id: 6,
+        inputs: [{
+          id: 19,
+          name: "toJobConfig.schemaName",
+          type: "STRING",
+          size: 50,
+          sensitive: false
+        }, {
+          id: 20,
+          name: "toJobConfig.tableName",
+          value: "text",
+          type: "STRING",
+          size: 2000,
+          sensitive: false
+        }, {
+          id: 21,
+          name: "toJobConfig.sql",
+          type: "STRING",
+          size: 50,
+          sensitive: false
+        }, {
+          id: 22,
+          name: "toJobConfig.columns",
+          type: "STRING",
+          size: 50,
+          sensitive: false
+        }, {
+          id: 23,
+          name: "toJobConfig.stageTableName",
+          type: "STRING",
+          size: 2000,
+          sensitive: false
+        }, {
+          id: 24,
+          name: "toJobConfig.shouldClearStageTable",
+          type: "BOOLEAN",
+          sensitive: false
+        }],
+        name: "toJobConfig",
+        type: "JOB"
+      }],
+      to-connector-id: 2,
+      creation-date: 1415310157618,
+      update-date: 1415310157618,
+      creation-user: "root",
+      id: 1,
+      to-link-id: 2,
+      from-config-values: [{
+        id: 2,
+        inputs: [{
+          id: 2,
+          name: "fromJobConfig.inputDirectory",
+          value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+          type: "STRING",
+          size: 255,
+          sensitive: false
+        }],
+        name: "fromJobConfig",
+        type: "JOB"
+      }],
+      name: "First Job",
+      from-link-id: 1
+    }
+  }
+
+
+/v1/job - [POST] - Create Job
+---------------------------------------------------------
+
+Create a new job object with the corresponding config values.
+
+* Method: ``POST``
+* Format: ``JSON``
+
+* Fields of Request:
+
+
++--------------------------+--------------------------------------------------------------------------------------+
+| Field                    | Description                                                                          |
++==========================+======================================================================================+
+| ``job``                  | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-link-id``         | The id of the from link for the job                                                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-link-id``           | The id of the to link for the job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the job can be left blank in the post data                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this job (true/false)                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this job                                                                 |
+--------------------------+--------------------------------------------------------------------------------------+
+| ``from-config-values``   | Config input values for FROM part of the job                                         |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-config-values``     | Config input values for TO part of the job                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``driver-config-values`` | Config input values for driver                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this job                                            |
++--------------------------+--------------------------------------------------------------------------------------+
+
+
+* Request Example:
+
+::
+
+  {
+    job: {
+      driver-config-values: [
+        {
+          id: 7,
+          inputs: [
+            {
+              id: 25,
+              name: "throttlingConfig.numExtractors",
+              value: "3",
+              type: "INTEGER",
+              sensitive: false
+            },
+            {
+              id: 26,
+              name: "throttlingConfig.numLoaders",
+              value: "3",
+              type: "INTEGER",
+              sensitive: false
+            }
+          ],
+          name: "throttlingConfig",
+          type: "JOB"
+        }
+      ],
+      enabled: true,
+      from-connector-id: 1,
+      update-user: "root",
+      to-config-values: [
+        {
+          id: 6,
+          inputs: [
+            {
+              id: 19,
+              name: "toJobConfig.schemaName",
+              type: "STRING",
+              size: 50,
+              sensitive: false
+            },
+            {
+              id: 20,
+              name: "toJobConfig.tableName",
+              value: "text",
+              type: "STRING",
+              size: 2000,
+              sensitive: false
+            },
+            {
+              id: 21,
+              name: "toJobConfig.sql",
+              type: "STRING",
+              size: 50,
+              sensitive: false
+            },
+            {
+              id: 22,
+              name: "toJobConfig.columns",
+              type: "STRING",
+              size: 50,
+              sensitive: false
+            },
+            {
+              id: 23,
+              name: "toJobConfig.stageTableName",
+              type: "STRING",
+              size: 2000,
+              sensitive: false
+            },
+            {
+              id: 24,
+              name: "toJobConfig.shouldClearStageTable",
+              type: "BOOLEAN",
+              sensitive: false
+            }
+          ],
+          name: "toJobConfig",
+          type: "JOB"
+        }
+      ],
+      to-connector-id: 2,
+      creation-date: 1415310157618,
+      update-date: 1415310157618,
+      creation-user: "root",
+      id: -1,
+      to-link-id: 2,
+      from-config-values: [
+        {
+          id: 2,
+          inputs: [
+            {
+              id: 2,
+              name: "fromJobConfig.inputDirectory",
+              value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+              type: "STRING",
+              size: 255,
+              sensitive: false
+            }
+          ],
+          name: "fromJobConfig",
+          type: "JOB"
+        }
+      ],
+      name: "Test Job",
+      from-link-id: 1
+    }
+  }
+
+* Fields of Response:
+
++--------------------------+---------------------------------------------------------------------------------------+
+| Field                    | Description                                                                           |
++==========================+=======================================================================================+
+| ``id``                   | The id assigned for this newly created job                                           |
++--------------------------+---------------------------------------------------------------------------------------+
+| ``validation-result``    | The validation status for the job config and driver config inputs in the post data   |
++--------------------------+---------------------------------------------------------------------------------------+
+
+
+* ERROR Response Example:
+
+::
+
+  {
+    "validation-result": [
+      {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+    ]
+  }
+
+
+/v1/job/[jid] - [PUT] - Update Job
+---------------------------------------------------------
+
+Update an existing job object with id [jid]. To make the procedure of filling inputs easier, the general practice
+is to get the existing job object first and then change some of the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+The same as Create Job.
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+      {}
+    ]
+  }
+
+
+/v1/job/[jid] - [DELETE] - Delete Job
+---------------------------------------------------------
+
+Delete a job with id ``jid``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/enable - [PUT] - Enable Job
+---------------------------------------------------------
+
+Enable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/disable - [PUT] - Disable Job
+---------------------------------------------------------
+
+Disable a job with id ``jid``.
+ +* Method: ``PUT`` +* Format: ``JSON`` +* Request Content: ``None`` +* Response Content: ``None`` + + +/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT]- Start Job +--------------------------------------------------------------------------------- + +Start a job with name ``[jname]`` or with id ``[jid]`` to trigger the job execution + +* Method: ``POST`` +* Format: ``JSON`` +* Request Content: ``None`` +* Response Content: ``Submission Record`` + +* BOOTING Response Example + +:: + + { + "submission": { + "progress": -1, + "last-update-date": 1415312531188, + "external-id": "job_1412137947693_0004", + "status": "BOOTING", + "job": 2, + "creation-date": 1415312531188, + "to-schema": { + "created": 1415312531426, + "name": "HDFS file", + "columns": [] + }, + "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/", + "from-schema": { + "created": 1415312531342, + "name": "text", + "columns": [ + { + "name": "id", + "nullable": true, + "unsigned": null, + "type": "FIXED_POINT", + "size": null + }, + { + "name": "txt", + "nullable": true, + "type": "TEXT", + "size": null + } + ] + } + } + } + +* SUCCEEDED Response Example + +:: + + { + submission: { + progress: -1, + last-update-date: 1415312809485, + external-id: "job_1412137947693_0004", + status: "SUCCEEDED", + job: 2, + creation-date: 1415312531188, + external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/", + counters: { + org.apache.hadoop.mapreduce.JobCounter: { + SLOTS_MILLIS_MAPS: 373553, + MB_MILLIS_MAPS: 382518272, + TOTAL_LAUNCHED_MAPS: 10, + MILLIS_MAPS: 373553, + VCORES_MILLIS_MAPS: 373553, + OTHER_LOCAL_MAPS: 10 + }, + org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: { + BYTES_WRITTEN: 0 + }, + org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: { + BYTES_READ: 0 + }, + org.apache.hadoop.mapreduce.TaskCounter: { + MAP_INPUT_RECORDS: 0, + MERGED_MAP_OUTPUTS: 0, + PHYSICAL_MEMORY_BYTES: 4065599488, + SPILLED_RECORDS: 0, + COMMITTED_HEAP_BYTES: 3439853568, + CPU_MILLISECONDS: 236900, + FAILED_SHUFFLE: 0, + VIRTUAL_MEMORY_BYTES: 15231422464, + SPLIT_RAW_BYTES: 1187, + MAP_OUTPUT_RECORDS: 1000000, + GC_TIME_MILLIS: 7282 + }, + org.apache.hadoop.mapreduce.FileSystemCounter: { + FILE_WRITE_OPS: 0, + FILE_READ_OPS: 0, + FILE_LARGE_READ_OPS: 0, + FILE_BYTES_READ: 0, + HDFS_BYTES_READ: 1187, + FILE_BYTES_WRITTEN: 1191230, + HDFS_LARGE_READ_OPS: 0, + HDFS_WRITE_OPS: 10, + HDFS_READ_OPS: 10, + HDFS_BYTES_WRITTEN: 276389736 + }, + org.apache.sqoop.submission.counter.SqoopCounters: { + ROWS_READ: 1000000 + } + } + } + } + + +* ERROR Response Example + +:: + + { + "submission": { + "progress": -1, + "last-update-date": 1415312390570, + "status": "FAILURE_ON_SUBMIT", + "exception": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run", + "job": 1, + "creation-date": 1415312390570, + "to-schema": { + "created": 1415312390797, + "name": "text", + "columns": [ + { + "name": "id", + "nullable": true, + "unsigned": null, + "type": "FIXED_POINT", + "size": null + }, + { + "name": "txt", + "nullable": true, + "type": "TEXT", + "size": null + } + ] + }, + "from-schema": { + "created": 1415312390778, + "name": "HDFS file", + "columns": [ + ] + }, + "exception-trace": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_00" + } + } + +/v1/job/[jid]/stop or /v1/job/[jname]/stop - [PUT]- Stop Job +--------------------------------------------------------------------------------- + +Stop a job 
with name ``[jname]`` or with id ``[jid]`` to abort the running job.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+/v1/job/[jid]/status or /v1/job/[jname]/status - [GET] - Get Job Status
+---------------------------------------------------------------------------------
+
+Get the status of the running job with name ``[jname]`` or with id ``[jid]``.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+::
+
+  {
+    "submission": {
+      "progress": 0.25,
+      "last-update-date": 1415312603838,
+      "external-id": "job_1412137947693_0004",
+      "status": "RUNNING",
+      "job": 2,
+      "creation-date": 1415312531188,
+      "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/"
+    }
+  }
+
+/v1/submissions? - [GET] - Get all job Submissions
+----------------------------------------------------------------------
+
+Get all the submissions for every job started in Sqoop.
+
+/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job
+----------------------------------------------------------------------
+
+Retrieve all job submissions in the past for the given job. Each submission record will have details such as the status, counters and urls for those submissions.
+
+Provide the name of the job in the url [jname] part.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Fields of Response:
+
++--------------------------+--------------------------------------------------------------------------------------+
+| Field                    | Description                                                                          |
++==========================+======================================================================================+
+| ``progress``             | The progress of the running Sqoop job                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``job``                  | The id of the Sqoop job                                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The submission timestamp                                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``last-update-date``     | The timestamp of the last status update                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``status``               | The status of this job submission                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-id``          | The job id of the Sqoop job running on Hadoop                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-link``        | The link to track the job status on Hadoop                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+  {
+    submissions: [
+      {
+        progress: -1,
+        last-update-date: 1415312809485,
+        external-id: "job_1412137947693_0004",
+        status: "SUCCEEDED",
+        job: 2,
+        creation-date: 1415312531188,
+        external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+        counters: {
+          org.apache.hadoop.mapreduce.JobCounter: {
+            SLOTS_MILLIS_MAPS: 373553,
+            MB_MILLIS_MAPS: 382518272,
+            TOTAL_LAUNCHED_MAPS: 10,
+            MILLIS_MAPS: 373553,
+            VCORES_MILLIS_MAPS: 373553,
+            OTHER_LOCAL_MAPS: 10
+          },
+          org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
BYTES_WRITTEN: 0
+          },
+          org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+            BYTES_READ: 0
+          },
+          org.apache.hadoop.mapreduce.TaskCounter: {
+            MAP_INPUT_RECORDS: 0,
+            MERGED_MAP_OUTPUTS: 0,
+            PHYSICAL_MEMORY_BYTES: 4065599488,
+            SPILLED_RECORDS: 0,
+            COMMITTED_HEAP_BYTES: 3439853568,
+            CPU_MILLISECONDS: 236900,
+            FAILED_SHUFFLE: 0,
+            VIRTUAL_MEMORY_BYTES: 15231422464,
+            SPLIT_RAW_BYTES: 1187,
+            MAP_OUTPUT_RECORDS: 1000000,
+            GC_TIME_MILLIS: 7282
+          },
+          org.apache.hadoop.mapreduce.FileSystemCounter: {
+            FILE_WRITE_OPS: 0,
+            FILE_READ_OPS: 0,
+            FILE_LARGE_READ_OPS: 0,
+            FILE_BYTES_READ: 0,
+            HDFS_BYTES_READ: 1187,
+            FILE_BYTES_WRITTEN: 1191230,
+            HDFS_LARGE_READ_OPS: 0,
+            HDFS_WRITE_OPS: 10,
+            HDFS_READ_OPS: 10,
+            HDFS_BYTES_WRITTEN: 276389736
+          },
+          org.apache.sqoop.submission.counter.SqoopCounters: {
+            ROWS_READ: 1000000
+          }
+        }
+      },
+      {
+        progress: -1,
+        last-update-date: 1415312390570,
+        status: "FAILURE_ON_SUBMIT",
+        exception: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+        job: 1,
+        creation-date: 1415312390570,
+        exception-trace: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
+      }
+    ]
+  }
\ No newline at end of file
Index: content/resources/docs/1.99.4/_sources/Sqoop5MinutesDemo.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/Sqoop5MinutesDemo.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/Sqoop5MinutesDemo.txt	(working copy)
@@ -0,0 +1,222 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+====================
+Sqoop 5 Minutes Demo
+====================
+
+This page will walk you through the basic usage of Sqoop. You need to have the Sqoop server and client installed and configured in order to follow this guide. The installation procedure is described on the `Installation page `_. Please note that the exact output shown on this page might differ from yours as Sqoop evolves; all major information should however remain the same.
+
+Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database id.
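+
+For example, the job created later in this guide could be addressed through the REST API either by its persistent id or by its unique name - a sketch assuming the default server location and the job name used below::
+
+  curl http://localhost:12000/sqoop/v1/job/1
+  curl http://localhost:12000/sqoop/v1/job/Sqoopy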
+
+Starting Client
+===============
+
+Start the client in interactive mode using the following command: ::
+
+  sqoop2-shell
+
+Configure the client to use your Sqoop server: ::
+
+  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+Verify that the connection is working with a simple version check: ::
+
+  sqoop:000> show version --all
+  client version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov 3 08:18:21 PST 2014
+  server version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov 3 08:18:21 PST 2014
+  API versions:
+    [v1]
+
+You should receive output similar to that shown above, describing the sqoop client build version, the server build version and the supported versions of the rest API.
+
+You can use the help command to check all the supported commands in the sqoop shell.
+
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+
+Creating Link Object
+==========================
+
+Check for the registered connectors on your Sqoop server: ::
+
+  sqoop:000> show connector --all
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | Id | Name                   | Version        | Class                                                | Supported Directions |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
+  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+
+Our example contains two connectors. The one with connector Id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with the most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately. They are not bundled in Sqoop due to incompatible licenses.
+
+The Generic JDBC Connector in our example has a persistent id 2 and we will use this value to create a new link object for this connector. Note that the link name should be unique.
::
+
+  sqoop:000> create link --cid 2
+  Creating link for connector with id 2
+  Please fill following values to create new link object
+  Name: First Link
+
+  Link configuration
+  JDBC Driver Class: com.mysql.jdbc.Driver
+  JDBC Connection String: jdbc:mysql://mysql.server/database
+  Username: sqoop
+  Password: *****
+  JDBC Connection Properties:
+  There are currently 0 values in the map:
+  entry#protocol=tcp
+  New link was successfully created with validation status OK and persistent id 1
+
+Our new link object was created with assigned id 1.
+
+In the output of ``show connector --all`` we see that there is a hdfs-connector registered in sqoop with the persistent id 1. Let us create another link object, but this time for the hdfs-connector instead.
+
+::
+
+  sqoop:000> create link --cid 1
+  Creating link for connector with id 1
+  Please fill following values to create new link object
+  Name: Second Link
+
+  Link configuration
+  HDFS URI: hdfs://nameservice1:8020/
+  New link was successfully created with validation status OK and persistent id 2
+
+Creating Job Object
+===================
+
+Connectors implement ``From`` for reading data from a data source and/or ``To`` for writing data to one. The Generic JDBC Connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector --all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link ids. We already have 2 links created in the system; you can verify the same with the following command:
+
+::
+
+  sqoop:000> show links -all
+  2 link(s) to show:
+  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
+  Using Connector id 2
+    Link configuration
+      JDBC Driver Class: com.mysql.jdbc.Driver
+      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
+      Username: sqoop
+      Password:
+      JDBC Connection Properties:
+        protocol = tcp
+  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
+  Using Connector id 1
+    Link configuration
+      HDFS URI: hdfs://nameservice1:8020/
+
+Next, we can use the two link ids to associate the ``From`` and ``To`` for the job.
+
+::
+
+  sqoop:000> create job -f 1 -t 2
+  Creating job for links with from id 1 and to id 2
+  Please fill following values to create new job object
+  Name: Sqoopy
+
+  FromJob configuration
+
+  Schema name:(Required)sqoop
+  Table name:(Required)sqoop
+  Table SQL statement:(Optional)
+  Table column names:(Optional)
+  Partition column name:(Optional) id
+  Null value allowed for the partition column:(Optional)
+  Boundary query:(Optional)
+
+  ToJob configuration
+
+  Output format:
+    0 : TEXT_FILE
+    1 : SEQUENCE_FILE
+  Choose: 0
+  Compression format:
+    0 : NONE
+    1 : DEFAULT
+    2 : DEFLATE
+    3 : GZIP
+    4 : BZIP2
+    5 : LZO
+    6 : LZ4
+    7 : SNAPPY
+    8 : CUSTOM
+  Choose: 0
+  Custom compression format:(Optional)
+  Output directory:(Required)/root/projects/sqoop
+
+  Driver Config
+
+  Extractors: 2
+  Loaders: 2
+  New job was successfully created with validation status OK and persistent id 1
+
+Our new job object was created with assigned id 1.
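+
+Before transferring any data, you can review the job you just created with the ``show job`` command - a quick sanity check (consult the `Command Line Client `_ section for the exact flags)::
+
+  sqoop:000> show job --jid 1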
+
+Start Job ( a.k.a Data transfer )
+=================================
+
+You can start a sqoop job with the following command: ::
+
+  sqoop:000> start job --jid 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING - Progress is not available
+
+You can iteratively check your running job status with the ``status job`` command: ::
+
+  sqoop:000> status job --jid 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 20:09:16 PST: RUNNING - 0.00 %
+
+And finally you can stop the running job at any time using the ``stop job`` command: ::
+
+  sqoop:000> stop job --jid 1
\ No newline at end of file
Index: content/resources/docs/1.99.4/_sources/Tools.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/Tools.txt	(revision 0)
+++ content/resources/docs/1.99.4/_sources/Tools.txt	(working copy)
@@ -0,0 +1,129 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements. See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====
+Tools
+=====
+
+Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. The tool execution will always perform a given task and finish. There are no long running services implemented as tools.
+
+In order to perform the maintenance task each tool is supposed to do, it needs to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However it's up to the administrator to run the tool as the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
+
+
+.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to data corruption and service disruption.
+
+List of available tools:
+
+* verify
+* upgrade
+
+To run the desired tool, execute the binary ``sqoop2-tool`` with the desired tool name. For example, to run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while the Sqoop Server is running can lead to data corruption and service disruption.
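+
+A typical maintenance session therefore follows the pattern sketched below - stop the server, run the desired tool, then start the server again::
+
+  sqoop2-server stop
+  sqoop2-tool verify
+  sqoop2-server start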
+
+Verify
+======
+
+The verify tool will verify the Sqoop server configuration by starting all subsystems, with the exception of servlets, and tearing them down.
+
+To run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+If the verification process succeeds, you should see messages like::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+If the verification process finds any inconsistencies, it will print out the following message instead::
+
+  Verification has failed, please check Server logs for further details.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+Upgrade
+=======
+
+Upgrades all versionable components inside Sqoop2. This includes structural changes inside the repository and stored metadata.
+Running this tool on a Sqoop deployment that was already upgraded will have no effect.
+
+To run the ``upgrade`` tool::
+
+  sqoop2-tool upgrade
+
+Upon successful upgrade you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+Execution failure will show the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryDump
+==============
+
+Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
+
+To run the ``repositorydump`` tool::
+
+  sqoop2-tool repositorydump -o repository.json
+
+As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
+
+  sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
+
+If the repository dump has failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+Further details on why the dump process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryLoad
+==============
+
+Reads a JSON formatted file created by RepositoryDump and loads it into the current Sqoop repository.
+
+To run the ``repositoryload`` tool::
+
+  sqoop2-tool repositoryload -i repository.json
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
+
+If the repository load failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+You might also see an exception. Further details on why the load process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+.. note:: If the repository dump was created without passwords (the default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the password.
+.. note:: The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
+
+
+
+
+
Index: content/resources/docs/1.99.4/_sources/Upgrade.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/Upgrade.txt (revision 0)
+++ content/resources/docs/1.99.4/_sources/Upgrade.txt (working copy)
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======
+Upgrade
+=======
+
+This page describes the procedure that you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and server components will be discussed separately.
+
+.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
+
+Upgrading Server
+================
+
+As the Sqoop server uses a database repository for persisting Sqoop entities such as the connector, driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
+
+There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.
+
+It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
+
+Upgrading Server using upgrade tool
+-----------------------------------
+
+The preferred upgrade path is to explicitly run the `Upgrade Tool `_. The first step, however, is to shut down the server, as having both the server and the upgrade utility access the same repository might corrupt it::
+
+  sqoop2-server stop
+
+When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
+
+  sqoop2-tool upgrade
+
+You should see that the upgrade process has been successful::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+In case of any failure, please take a look at the `Upgrade Tool `_ documentation page.
+
+Upgrading Server on start-up
+----------------------------
+
+The capability of performing the upgrade has been built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
+
+  sqoop2-server stop
+
+Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the repository schema upgrade::
+
+  org.apache.sqoop.repository.schema.immutable=false
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the connector config data upgrade::
+
+  org.apache.sqoop.connector.autoupgrade=true
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the driver config data upgrade::
+
+  org.apache.sqoop.driver.autoupgrade=true
+
+When all properties are set, start the Sqoop server using the following command::
+
+  sqoop2-server start
+
+All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
+
+Upgrading Client
+================
+
+The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.
Index: content/resources/docs/1.99.4/_sources/index.txt
===================================================================
--- content/resources/docs/1.99.4/_sources/index.txt (revision 0)
+++ content/resources/docs/1.99.4/_sources/index.txt (working copy)
@@ -0,0 +1,77 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======================================
+Apache Sqoop documentation
+=======================================
+
+Apache Sqoop is a tool designed for efficiently transferring data between structured, semi-structured and unstructured data sources. Relational databases are examples of structured data sources with a well defined schema for the data they store. Cassandra and HBase are examples of semi-structured data sources, and HDFS is an example of an unstructured data source that Sqoop can support.
+
+License
+-------
+
+::
+
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+
+
+User Guide
+------------
+If you are excited to start using Sqoop, you can follow the links below to get a quick overview of the system:
+
+- `Sqoop 5 Minute Demo `_
+- `Command Line Shell Usage Guide `_
+
+Developer Guide
+-----------------
+
+If you are keen on contributing to Sqoop and getting your hands dirty building connectors or interesting UI/applications for Sqoop internals, check out the links below:
+
+- `Building Sqoop 2 `_
+- `Sqoop Development Environment Setup `_
+- `Developing a Sqoop Connector with Connection API `_
+- `Developing Sqoop application with REST API `_
+- `Developing Sqoop application using Sqoop Java Client API `_
+
+
+Administrator Guide
+--------------------
+If you are an admin trying to set up Sqoop, check out the links below:
+
+- `Sqoop Server and Client Installation `_
+- `Sqoop Server Upgrade `_
+- `Sqoop Tools `_
+
+Sqoop Project Details
+---------------------
+
+- `Download Apache Sqoop `_
+- `Sqoop Apache Wiki `_
+- `Sqoop Issue Tracking (JIRA) `_
+- `Sqoop Source Code `_
Index: content/resources/docs/1.99.4/_static/ajax-loader.gif
===================================================================
Cannot display: file marked as a binary type.
svn:mime-type = application/octet-stream
Index: content/resources/docs/1.99.4/_static/ajax-loader.gif
===================================================================
--- content/resources/docs/1.99.4/_static/ajax-loader.gif (revision 1641479)
+++ content/resources/docs/1.99.4/_static/ajax-loader.gif (working copy)

Property changes on: content/resources/docs/1.99.4/_static/ajax-loader.gif
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Index: content/resources/docs/1.99.4/_static/alert_info_32.png
===================================================================
Cannot display: file marked as a binary type.
svn:mime-type = application/octet-stream
Index: content/resources/docs/1.99.4/_static/alert_info_32.png
===================================================================
--- content/resources/docs/1.99.4/_static/alert_info_32.png (revision 1641479)
+++ content/resources/docs/1.99.4/_static/alert_info_32.png (working copy)

Property changes on: content/resources/docs/1.99.4/_static/alert_info_32.png
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Index: content/resources/docs/1.99.4/_static/alert_warning_32.png
===================================================================
Cannot display: file marked as a binary type.
svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/alert_warning_32.png =================================================================== --- content/resources/docs/1.99.4/_static/alert_warning_32.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/alert_warning_32.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/alert_warning_32.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/basic.css =================================================================== --- content/resources/docs/1.99.4/_static/basic.css (revision 0) +++ content/resources/docs/1.99.4/_static/basic.css (working copy) @@ -0,0 +1,540 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar input[type="text"] { + width: 170px; +} + +div.sphinxsidebar input[type="submit"] { + width: 30px; +} + +img { + border: 0; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li div.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable dl, table.indextable dd { + margin-top: 0; + margin-bottom: 0; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + 
margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- general body styles --------------------------------------------------- */ + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.field-list ul { + padding-left: 1em; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px 7px 0 7px; + background-color: #ffe; + width: 40%; + float: right; +} + +p.sidebar-title { + font-weight: bold; +} + +/* -- topics ---------------------------------------------------------------- */ + +div.topic { + border: 1px solid #ccc; + padding: 7px 7px 0 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +div.admonition dl { + margin-bottom: 0; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + border: 0; + border-collapse: collapse; +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +table.field-list td, table.field-list th { + border: 0 !important; +} + +table.footnote td, table.footnote th { + border: 0 !important; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +dl { + margin-bottom: 15px; +} + +dd p { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +dt:target, .highlighted { + background-color: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + 
font-size: 1.1em; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.refcount { + color: #060; +} + +.optional { + font-size: 1.3em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +td.linenos pre { + padding: 5px 0px; + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + margin-left: 0.5em; +} + +table.highlighttable td { + padding: 0 0.5em 0 0.5em; +} + +tt.descname { + background-color: transparent; + font-weight: bold; + font-size: 1.2em; +} + +tt.descclassname { + background-color: transparent; +} + +tt.xref, a tt { + background-color: transparent; + font-weight: bold; +} + +h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file Index: content/resources/docs/1.99.4/_static/bg-page.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/bg-page.png =================================================================== --- content/resources/docs/1.99.4/_static/bg-page.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/bg-page.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/bg-page.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/bullet_orange.png =================================================================== Cannot display: file marked as a binary type. 
svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/bullet_orange.png =================================================================== --- content/resources/docs/1.99.4/_static/bullet_orange.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/bullet_orange.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/bullet_orange.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/comment-bright.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/comment-bright.png =================================================================== --- content/resources/docs/1.99.4/_static/comment-bright.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/comment-bright.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/comment-bright.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/comment-close.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/comment-close.png =================================================================== --- content/resources/docs/1.99.4/_static/comment-close.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/comment-close.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/comment-close.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/comment.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/comment.png =================================================================== --- content/resources/docs/1.99.4/_static/comment.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/comment.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/comment.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/doctools.js =================================================================== --- content/resources/docs/1.99.4/_static/doctools.js (revision 0) +++ content/resources/docs/1.99.4/_static/doctools.js (working copy) @@ -0,0 +1,247 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for all documentation. + * + * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/** + * select a different prefix for underscore + */ +$u = _.noConflict(); + +/** + * make the code below compatible with browsers without + * an installed firebug like debugger +if (!window.console || !console.firebug) { + var names = ["log", "debug", "info", "warn", "error", "assert", "dir", + "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", + "profile", "profileEnd"]; + window.console = {}; + for (var i = 0; i < names.length; ++i) + window.console[names[i]] = function() {}; +} + */ + +/** + * small helper function to urldecode strings + */ +jQuery.urldecode = function(x) { + return decodeURIComponent(x).replace(/\+/g, ' '); +} + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s == 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * small function to check if an array contains + * a given item. + */ +jQuery.contains = function(arr, item) { + for (var i = 0; i < arr.length; i++) { + if (arr[i] == item) + return true; + } + return false; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. + */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node) { + if (node.nodeType == 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && !jQuery(node.parentNode).hasClass(className)) { + var span = document.createElement("span"); + span.className = className; + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this); + }); + } + } + return this.each(function() { + highlight(this); + }); +}; + +/** + * Small JavaScript module for the documentation. + */ +var Documentation = { + + init : function() { + this.fixFirefoxAnchorBug(); + this.highlightSearchWords(); + this.initIndexTable(); + }, + + /** + * i18n support + */ + TRANSLATIONS : {}, + PLURAL_EXPR : function(n) { return n == 1 ? 0 : 1; }, + LOCALE : 'unknown', + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext : function(string) { + var translated = Documentation.TRANSLATIONS[string]; + if (typeof translated == 'undefined') + return string; + return (typeof translated == 'string') ? translated : translated[0]; + }, + + ngettext : function(singular, plural, n) { + var translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated == 'undefined') + return (n == 1) ? 
singular : plural; + return translated[Documentation.PLURALEXPR(n)]; + }, + + addTranslations : function(catalog) { + for (var key in catalog.messages) + this.TRANSLATIONS[key] = catalog.messages[key]; + this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')'); + this.LOCALE = catalog.locale; + }, + + /** + * add context elements like header anchor links + */ + addContextElements : function() { + $('div[id] > :header:first').each(function() { + $('\u00B6'). + attr('href', '#' + this.id). + attr('title', _('Permalink to this headline')). + appendTo(this); + }); + $('dt[id]').each(function() { + $('\u00B6'). + attr('href', '#' + this.id). + attr('title', _('Permalink to this definition')). + appendTo(this); + }); + }, + + /** + * workaround a firefox stupidity + */ + fixFirefoxAnchorBug : function() { + if (document.location.hash && $.browser.mozilla) + window.setTimeout(function() { + document.location.href += ''; + }, 10); + }, + + /** + * highlight the search words provided in the url in the text + */ + highlightSearchWords : function() { + var params = $.getQueryParameters(); + var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : []; + if (terms.length) { + var body = $('div.body'); + window.setTimeout(function() { + $.each(terms, function() { + body.highlightText(this.toLowerCase(), 'highlighted'); + }); + }, 10); + $('') + .appendTo($('#searchbox')); + } + }, + + /** + * init the domain index toggle buttons + */ + initIndexTable : function() { + var togglers = $('img.toggler').click(function() { + var src = $(this).attr('src'); + var idnum = $(this).attr('id').substr(7); + $('tr.cg-' + idnum).toggle(); + if (src.substr(-9) == 'minus.png') + $(this).attr('src', src.substr(0, src.length-9) + 'plus.png'); + else + $(this).attr('src', src.substr(0, src.length-8) + 'minus.png'); + }).css('display', ''); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) { + togglers.click(); + } + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords : function() { + $('#searchbox .highlight-link').fadeOut(300); + $('span.highlighted').removeClass('highlighted'); + }, + + /** + * make the url absolute + */ + makeURL : function(relativeURL) { + return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL; + }, + + /** + * get the current relative url + */ + getCurrentURL : function() { + var path = document.location.pathname; + var parts = path.split(/\//); + $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() { + if (this == '..') + parts.pop(); + }); + var url = parts.join('/'); + return path.substring(url.lastIndexOf('/') + 1, path.length - 1); + } +}; + +// quick alias for translations +_ = Documentation.gettext; + +$(document).ready(function() { + Documentation.init(); +}); Index: content/resources/docs/1.99.4/_static/down-pressed.png =================================================================== Cannot display: file marked as a binary type. 
svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/down-pressed.png =================================================================== --- content/resources/docs/1.99.4/_static/down-pressed.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/down-pressed.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/down-pressed.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/down.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/down.png =================================================================== --- content/resources/docs/1.99.4/_static/down.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/down.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/down.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/file.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/file.png =================================================================== --- content/resources/docs/1.99.4/_static/file.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/file.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/file.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/haiku.css =================================================================== --- content/resources/docs/1.99.4/_static/haiku.css (revision 0) +++ content/resources/docs/1.99.4/_static/haiku.css (working copy) @@ -0,0 +1,371 @@ +/* + * haiku.css_t + * ~~~~~~~~~~~ + * + * Sphinx stylesheet -- haiku theme. + * + * Adapted from http://haiku-os.org/docs/Haiku-doc.css. + * Original copyright message: + * + * Copyright 2008-2009, Haiku. All rights reserved. + * Distributed under the terms of the MIT License. + * + * Authors: + * Francois Revol + * Stephan Assmus + * Braden Ewing + * Humdinger + * + * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +@import url("basic.css"); + +html { + margin: 0px; + padding: 0px; + background: #FFF url(bg-page.png) top left repeat-x; +} + +body { + line-height: 1.5; + margin: auto; + padding: 0px; + font-family: "DejaVu Sans", Arial, Helvetica, sans-serif; + min-width: 59em; + max-width: 70em; + color: #333333; +} + +div.footer { + padding: 8px; + font-size: 11px; + text-align: center; + letter-spacing: 0.5px; +} + +/* link colors and text decoration */ + +a:link { + font-weight: bold; + text-decoration: none; + color: #dc3c01; +} + +a:visited { + font-weight: bold; + text-decoration: none; + color: #892601; +} + +a:hover, a:active { + text-decoration: underline; + color: #ff4500; +} + +/* Some headers act as anchors, don't give them a hover effect */ + +h1 a:hover, a:active { + text-decoration: none; + color: #0c3762; +} + +h2 a:hover, a:active { + text-decoration: none; + color: #0c3762; +} + +h3 a:hover, a:active { + text-decoration: none; + color: #0c3762; +} + +h4 a:hover, a:active { + text-decoration: none; + color: #0c3762; +} + +a.headerlink { + color: #a7ce38; + padding-left: 5px; +} + +a.headerlink:hover { + color: #a7ce38; +} + +/* basic text elements */ + +div.content { + margin-top: 20px; + margin-left: 40px; + margin-right: 40px; + margin-bottom: 50px; + font-size: 0.9em; +} + +/* heading and navigation */ + +div.header { + position: relative; + left: 0px; + top: 0px; + height: 85px; + /* background: #eeeeee; */ + padding: 0 40px; +} +div.header h1 { + font-size: 1.6em; + font-weight: normal; + letter-spacing: 1px; + color: #0c3762; + border: 0; + margin: 0; + padding-top: 15px; +} +div.header h1 a { + font-weight: normal; + color: #0c3762; +} +div.header h2 { + font-size: 1.3em; + font-weight: normal; + letter-spacing: 1px; + text-transform: uppercase; + color: #aaa; + border: 0; + margin-top: -3px; + padding: 0; +} + +div.header img.rightlogo { + float: right; +} + + +div.title { + font-size: 1.3em; + font-weight: bold; + color: #0c3762; + border-bottom: dotted thin #e0e0e0; + margin-bottom: 25px; +} +div.topnav { + /* background: #e0e0e0; */ +} +div.topnav p { + margin-top: 0; + margin-left: 40px; + margin-right: 40px; + margin-bottom: 0px; + text-align: right; + font-size: 0.8em; +} +div.bottomnav { + background: #eeeeee; +} +div.bottomnav p { + margin-right: 40px; + text-align: right; + font-size: 0.8em; +} + +a.uplink { + font-weight: normal; +} + + +/* contents box */ + +table.index { + margin: 0px 0px 30px 30px; + padding: 1px; + border-width: 1px; + border-style: dotted; + border-color: #e0e0e0; +} +table.index tr.heading { + background-color: #e0e0e0; + text-align: center; + font-weight: bold; + font-size: 1.1em; +} +table.index tr.index { + background-color: #eeeeee; +} +table.index td { + padding: 5px 20px; +} + +table.index a:link, table.index a:visited { + font-weight: normal; + text-decoration: none; + color: #dc3c01; +} +table.index a:hover, table.index a:active { + text-decoration: underline; + color: #ff4500; +} + + +/* Haiku User Guide styles and layout */ + +/* Rounded corner boxes */ +/* Common declarations */ +div.admonition { + -webkit-border-radius: 10px; + -khtml-border-radius: 10px; + -moz-border-radius: 10px; + border-radius: 10px; + border-style: dotted; + border-width: thin; + border-color: #dcdcdc; + padding: 10px 15px 10px 15px; + margin-bottom: 15px; + margin-top: 15px; +} +div.note { + padding: 10px 15px 10px 80px; + background: #e4ffde url(alert_info_32.png) 15px 15px no-repeat; + min-height: 42px; +} +div.warning { + padding: 10px 15px 
10px 80px; + background: #fffbc6 url(alert_warning_32.png) 15px 15px no-repeat; + min-height: 42px; +} +div.seealso { + background: #e4ffde; +} + +/* More layout and styles */ +h1 { + font-size: 1.3em; + font-weight: bold; + color: #0c3762; + border-bottom: dotted thin #e0e0e0; + margin-top: 30px; +} + +h2 { + font-size: 1.2em; + font-weight: normal; + color: #0c3762; + border-bottom: dotted thin #e0e0e0; + margin-top: 30px; +} + +h3 { + font-size: 1.1em; + font-weight: normal; + color: #0c3762; + margin-top: 30px; +} + +h4 { + font-size: 1.0em; + font-weight: normal; + color: #0c3762; + margin-top: 30px; +} + +p { + text-align: justify; +} + +p.last { + margin-bottom: 0; +} + +ol { + padding-left: 20px; +} + +ul { + padding-left: 5px; + margin-top: 3px; +} + +li { + line-height: 1.3; +} + +div.content ul > li { + -moz-background-clip:border; + -moz-background-inline-policy:continuous; + -moz-background-origin:padding; + background: transparent url(bullet_orange.png) no-repeat scroll left 0.45em; + list-style-image: none; + list-style-type: none; + padding: 0 0 0 1.666em; + margin-bottom: 3px; +} + +td { + vertical-align: top; +} + +tt { + background-color: #e2e2e2; + font-size: 1.0em; + font-family: monospace; +} + +pre { + border-color: #0c3762; + border-style: dotted; + border-width: thin; + margin: 0 0 12px 0; + padding: 0.8em; + background-color: #f0f0f0; +} + +hr { + border-top: 1px solid #ccc; + border-bottom: 0; + border-right: 0; + border-left: 0; + margin-bottom: 10px; + margin-top: 20px; +} + +/* printer only pretty stuff */ +@media print { + .noprint { + display: none; + } + /* for acronyms we want their definitions inlined at print time */ + acronym[title]:after { + font-size: small; + content: " (" attr(title) ")"; + font-style: italic; + } + /* and not have mozilla dotted underline */ + acronym { + border: none; + } + div.topnav, div.bottomnav, div.header, table.index { + display: none; + } + div.content { + margin: 0px; + padding: 0px; + } + html { + background: #FFF; + } +} + +.viewcode-back { + font-family: "DejaVu Sans", Arial, Helvetica, sans-serif; +} + +div.viewcode-block:target { + background-color: #f4debf; + border-top: 1px solid #ac9; + border-bottom: 1px solid #ac9; + margin: -1px -12px; + padding: 0 12px; +} \ No newline at end of file Index: content/resources/docs/1.99.4/_static/jquery.js =================================================================== --- content/resources/docs/1.99.4/_static/jquery.js (revision 0) +++ content/resources/docs/1.99.4/_static/jquery.js (working copy) @@ -0,0 +1,154 @@ +/*! + * jQuery JavaScript Library v1.4.2 + * http://jquery.com/ + * + * Copyright 2010, John Resig + * Dual licensed under the MIT or GPL Version 2 licenses. + * http://jquery.org/license + * + * Includes Sizzle.js + * http://sizzlejs.com/ + * Copyright 2010, The Dojo Foundation + * Released under the MIT, BSD, and GPL Licenses. 
+ * + * Date: Sat Feb 13 22:33:48 2010 -0500 + */ +(function(A,w){function ma(){if(!c.isReady){try{s.documentElement.doScroll("left")}catch(a){setTimeout(ma,1);return}c.ready()}}function Qa(a,b){b.src?c.ajax({url:b.src,async:false,dataType:"script"}):c.globalEval(b.text||b.textContent||b.innerHTML||"");b.parentNode&&b.parentNode.removeChild(b)}function X(a,b,d,f,e,j){var i=a.length;if(typeof b==="object"){for(var o in b)X(a,o,b[o],f,e,d);return a}if(d!==w){f=!j&&f&&c.isFunction(d);for(o=0;o)[^>]*$|^#([\w-]+)$/,Ua=/^.[^:#\[\.,]*$/,Va=/\S/, +Wa=/^(\s|\u00A0)+|(\s|\u00A0)+$/g,Xa=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,P=navigator.userAgent,xa=false,Q=[],L,$=Object.prototype.toString,aa=Object.prototype.hasOwnProperty,ba=Array.prototype.push,R=Array.prototype.slice,ya=Array.prototype.indexOf;c.fn=c.prototype={init:function(a,b){var d,f;if(!a)return this;if(a.nodeType){this.context=this[0]=a;this.length=1;return this}if(a==="body"&&!b){this.context=s;this[0]=s.body;this.selector="body";this.length=1;return this}if(typeof a==="string")if((d=Ta.exec(a))&& +(d[1]||!b))if(d[1]){f=b?b.ownerDocument||b:s;if(a=Xa.exec(a))if(c.isPlainObject(b)){a=[s.createElement(a[1])];c.fn.attr.call(a,b,true)}else a=[f.createElement(a[1])];else{a=sa([d[1]],[f]);a=(a.cacheable?a.fragment.cloneNode(true):a.fragment).childNodes}return c.merge(this,a)}else{if(b=s.getElementById(d[2])){if(b.id!==d[2])return T.find(a);this.length=1;this[0]=b}this.context=s;this.selector=a;return this}else if(!b&&/^\w+$/.test(a)){this.selector=a;this.context=s;a=s.getElementsByTagName(a);return c.merge(this, +a)}else return!b||b.jquery?(b||T).find(a):c(b).find(a);else if(c.isFunction(a))return T.ready(a);if(a.selector!==w){this.selector=a.selector;this.context=a.context}return c.makeArray(a,this)},selector:"",jquery:"1.4.2",length:0,size:function(){return this.length},toArray:function(){return R.call(this,0)},get:function(a){return a==null?this.toArray():a<0?this.slice(a)[0]:this[a]},pushStack:function(a,b,d){var f=c();c.isArray(a)?ba.apply(f,a):c.merge(f,a);f.prevObject=this;f.context=this.context;if(b=== +"find")f.selector=this.selector+(this.selector?" ":"")+d;else if(b)f.selector=this.selector+"."+b+"("+d+")";return f},each:function(a,b){return c.each(this,a,b)},ready:function(a){c.bindReady();if(c.isReady)a.call(s,c);else Q&&Q.push(a);return this},eq:function(a){return a===-1?this.slice(a):this.slice(a,+a+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(R.apply(this,arguments),"slice",R.call(arguments).join(","))},map:function(a){return this.pushStack(c.map(this, +function(b,d){return a.call(b,d,b)}))},end:function(){return this.prevObject||c(null)},push:ba,sort:[].sort,splice:[].splice};c.fn.init.prototype=c.fn;c.extend=c.fn.extend=function(){var a=arguments[0]||{},b=1,d=arguments.length,f=false,e,j,i,o;if(typeof a==="boolean"){f=a;a=arguments[1]||{};b=2}if(typeof a!=="object"&&!c.isFunction(a))a={};if(d===b){a=this;--b}for(;b
a"; +var e=d.getElementsByTagName("*"),j=d.getElementsByTagName("a")[0];if(!(!e||!e.length||!j)){c.support={leadingWhitespace:d.firstChild.nodeType===3,tbody:!d.getElementsByTagName("tbody").length,htmlSerialize:!!d.getElementsByTagName("link").length,style:/red/.test(j.getAttribute("style")),hrefNormalized:j.getAttribute("href")==="/a",opacity:/^0.55$/.test(j.style.opacity),cssFloat:!!j.style.cssFloat,checkOn:d.getElementsByTagName("input")[0].value==="on",optSelected:s.createElement("select").appendChild(s.createElement("option")).selected, +parentNode:d.removeChild(d.appendChild(s.createElement("div"))).parentNode===null,deleteExpando:true,checkClone:false,scriptEval:false,noCloneEvent:true,boxModel:null};b.type="text/javascript";try{b.appendChild(s.createTextNode("window."+f+"=1;"))}catch(i){}a.insertBefore(b,a.firstChild);if(A[f]){c.support.scriptEval=true;delete A[f]}try{delete b.test}catch(o){c.support.deleteExpando=false}a.removeChild(b);if(d.attachEvent&&d.fireEvent){d.attachEvent("onclick",function k(){c.support.noCloneEvent= +false;d.detachEvent("onclick",k)});d.cloneNode(true).fireEvent("onclick")}d=s.createElement("div");d.innerHTML="";a=s.createDocumentFragment();a.appendChild(d.firstChild);c.support.checkClone=a.cloneNode(true).cloneNode(true).lastChild.checked;c(function(){var k=s.createElement("div");k.style.width=k.style.paddingLeft="1px";s.body.appendChild(k);c.boxModel=c.support.boxModel=k.offsetWidth===2;s.body.removeChild(k).style.display="none"});a=function(k){var n= +s.createElement("div");k="on"+k;var r=k in n;if(!r){n.setAttribute(k,"return;");r=typeof n[k]==="function"}return r};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=e=j=null}})();c.props={"for":"htmlFor","class":"className",readonly:"readOnly",maxlength:"maxLength",cellspacing:"cellSpacing",rowspan:"rowSpan",colspan:"colSpan",tabindex:"tabIndex",usemap:"useMap",frameborder:"frameBorder"};var G="jQuery"+J(),Ya=0,za={};c.extend({cache:{},expando:G,noData:{embed:true,object:true, +applet:true},data:function(a,b,d){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var f=a[G],e=c.cache;if(!f&&typeof b==="string"&&d===w)return null;f||(f=++Ya);if(typeof b==="object"){a[G]=f;e[f]=c.extend(true,{},b)}else if(!e[f]){a[G]=f;e[f]={}}a=e[f];if(d!==w)a[b]=d;return typeof b==="string"?a[b]:a}},removeData:function(a,b){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var d=a[G],f=c.cache,e=f[d];if(b){if(e){delete e[b];c.isEmptyObject(e)&&c.removeData(a)}}else{if(c.support.deleteExpando)delete a[c.expando]; +else a.removeAttribute&&a.removeAttribute(c.expando);delete f[d]}}}});c.fn.extend({data:function(a,b){if(typeof a==="undefined"&&this.length)return c.data(this[0]);else if(typeof a==="object")return this.each(function(){c.data(this,a)});var d=a.split(".");d[1]=d[1]?"."+d[1]:"";if(b===w){var f=this.triggerHandler("getData"+d[1]+"!",[d[0]]);if(f===w&&this.length)f=c.data(this[0],a);return f===w&&d[1]?this.data(d[0]):f}else return this.trigger("setData"+d[1]+"!",[d[0],b]).each(function(){c.data(this, +a,b)})},removeData:function(a){return this.each(function(){c.removeData(this,a)})}});c.extend({queue:function(a,b,d){if(a){b=(b||"fx")+"queue";var f=c.data(a,b);if(!d)return f||[];if(!f||c.isArray(d))f=c.data(a,b,c.makeArray(d));else f.push(d);return f}},dequeue:function(a,b){b=b||"fx";var 
d=c.queue(a,b),f=d.shift();if(f==="inprogress")f=d.shift();if(f){b==="fx"&&d.unshift("inprogress");f.call(a,function(){c.dequeue(a,b)})}}});c.fn.extend({queue:function(a,b){if(typeof a!=="string"){b=a;a="fx"}if(b=== +w)return c.queue(this[0],a);return this.each(function(){var d=c.queue(this,a,b);a==="fx"&&d[0]!=="inprogress"&&c.dequeue(this,a)})},dequeue:function(a){return this.each(function(){c.dequeue(this,a)})},delay:function(a,b){a=c.fx?c.fx.speeds[a]||a:a;b=b||"fx";return this.queue(b,function(){var d=this;setTimeout(function(){c.dequeue(d,b)},a)})},clearQueue:function(a){return this.queue(a||"fx",[])}});var Aa=/[\n\t]/g,ca=/\s+/,Za=/\r/g,$a=/href|src|style/,ab=/(button|input)/i,bb=/(button|input|object|select|textarea)/i, +cb=/^(a|area)$/i,Ba=/radio|checkbox/;c.fn.extend({attr:function(a,b){return X(this,a,b,true,c.attr)},removeAttr:function(a){return this.each(function(){c.attr(this,a,"");this.nodeType===1&&this.removeAttribute(a)})},addClass:function(a){if(c.isFunction(a))return this.each(function(n){var r=c(this);r.addClass(a.call(this,n,r.attr("class")))});if(a&&typeof a==="string")for(var b=(a||"").split(ca),d=0,f=this.length;d-1)return true;return false},val:function(a){if(a===w){var b=this[0];if(b){if(c.nodeName(b,"option"))return(b.attributes.value||{}).specified?b.value:b.text;if(c.nodeName(b,"select")){var d=b.selectedIndex,f=[],e=b.options;b=b.type==="select-one";if(d<0)return null;var j=b?d:0;for(d=b?d+1:e.length;j=0;else if(c.nodeName(this,"select")){var u=c.makeArray(r);c("option",this).each(function(){this.selected= +c.inArray(c(this).val(),u)>=0});if(!u.length)this.selectedIndex=-1}else this.value=r}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(a,b,d,f){if(!a||a.nodeType===3||a.nodeType===8)return w;if(f&&b in c.attrFn)return c(a)[b](d);f=a.nodeType!==1||!c.isXMLDoc(a);var e=d!==w;b=f&&c.props[b]||b;if(a.nodeType===1){var j=$a.test(b);if(b in a&&f&&!j){if(e){b==="type"&&ab.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed"); +a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&&b.specified?b.value:bb.test(a.nodeName)||cb.test(a.nodeName)&&a.href?0:w;return a[b]}if(!c.support.style&&f&&b==="style"){if(e)a.style.cssText=""+d;return a.style.cssText}e&&a.setAttribute(b,""+d);a=!c.support.hrefNormalized&&f&&j?a.getAttribute(b,2):a.getAttribute(b);return a===null?w:a}return c.style(a,b,d)}});var O=/\.(.*)$/,db=function(a){return a.replace(/[^\w\s\.\|`]/g, +function(b){return"\\"+b})};c.event={add:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){if(a.setInterval&&a!==A&&!a.frameElement)a=A;var e,j;if(d.handler){e=d;d=e.handler}if(!d.guid)d.guid=c.guid++;if(j=c.data(a)){var i=j.events=j.events||{},o=j.handle;if(!o)j.handle=o=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(o.elem,arguments):w};o.elem=a;b=b.split(" ");for(var k,n=0,r;k=b[n++];){j=e?c.extend({},e):{handler:d,data:f};if(k.indexOf(".")>-1){r=k.split("."); +k=r.shift();j.namespace=r.slice(0).sort().join(".")}else{r=[];j.namespace=""}j.type=k;j.guid=d.guid;var u=i[k],z=c.event.special[k]||{};if(!u){u=i[k]=[];if(!z.setup||z.setup.call(a,f,r,o)===false)if(a.addEventListener)a.addEventListener(k,o,false);else 
a.attachEvent&&a.attachEvent("on"+k,o)}if(z.add){z.add.call(a,j);if(!j.handler.guid)j.handler.guid=d.guid}u.push(j);c.event.global[k]=true}a=null}}},global:{},remove:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){var e,j=0,i,o,k,n,r,u,z=c.data(a), +C=z&&z.events;if(z&&C){if(b&&b.type){d=b.handler;b=b.type}if(!b||typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(e in C)c.event.remove(a,e+b)}else{for(b=b.split(" ");e=b[j++];){n=e;i=e.indexOf(".")<0;o=[];if(!i){o=e.split(".");e=o.shift();k=new RegExp("(^|\\.)"+c.map(o.slice(0).sort(),db).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(r=C[e])if(d){n=c.event.special[e]||{};for(B=f||0;B=0){a.type= +e=e.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[e]&&c.each(c.cache,function(){this.events&&this.events[e]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType===8)return w;a.result=w;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(f=c.data(d,"handle"))&&f.apply(d,b);f=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+e]&&d["on"+e].apply(d,b)===false)a.result=false}catch(j){}if(!a.isPropagationStopped()&& +f)c.event.trigger(a,b,f,true);else if(!a.isDefaultPrevented()){f=a.target;var i,o=c.nodeName(f,"a")&&e==="click",k=c.event.special[e]||{};if((!k._default||k._default.call(d,a)===false)&&!o&&!(f&&f.nodeName&&c.noData[f.nodeName.toLowerCase()])){try{if(f[e]){if(i=f["on"+e])f["on"+e]=null;c.event.triggered=true;f[e]()}}catch(n){}if(i)f["on"+e]=i;c.event.triggered=false}}},handle:function(a){var b,d,f,e;a=arguments[0]=c.event.fix(a||A.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive; +if(!b){d=a.type.split(".");a.type=d.shift();f=new RegExp("(^|\\.)"+d.slice(0).sort().join("\\.(?:.*\\.)?")+"(\\.|$)")}e=c.data(this,"events");d=e[a.type];if(e&&d){d=d.slice(0);e=0;for(var j=d.length;e-1?c.map(a.options,function(f){return f.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},fa=function(a,b){var d=a.target,f,e;if(!(!da.test(d.nodeName)||d.readOnly)){f=c.data(d,"_change_data");e=Fa(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data", +e);if(!(f===w||e===f))if(f!=null||e){a.type="change";return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:fa,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return fa.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return fa.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a, +"_change_data",Fa(a))}},setup:function(){if(this.type==="file")return false;for(var a in ea)c.event.add(this,a+".specialChange",ea[a]);return da.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return da.test(this.nodeName)}};ea=c.event.special.change.filters}s.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function d(f){f=c.event.fix(f);f.type=b;return c.event.handle.call(this,f)}c.event.special[b]={setup:function(){this.addEventListener(a, +d,true)},teardown:function(){this.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,f,e){if(typeof d==="object"){for(var j in d)this[b](j,f,d[j],e);return this}if(c.isFunction(f)){e=f;f=w}var i=b==="one"?c.proxy(e,function(k){c(this).unbind(k,i);return 
e.apply(this,arguments)}):e;if(d==="unload"&&b!=="one")this.one(d,f,e);else{j=0;for(var o=this.length;j0){y=t;break}}t=t[g]}m[q]=y}}}var f=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^[\]]*\]|['"][^'"]*['"]|[^[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g, +e=0,j=Object.prototype.toString,i=false,o=true;[0,0].sort(function(){o=false;return 0});var k=function(g,h,l,m){l=l||[];var q=h=h||s;if(h.nodeType!==1&&h.nodeType!==9)return[];if(!g||typeof g!=="string")return l;for(var p=[],v,t,y,S,H=true,M=x(h),I=g;(f.exec(""),v=f.exec(I))!==null;){I=v[3];p.push(v[1]);if(v[2]){S=v[3];break}}if(p.length>1&&r.exec(g))if(p.length===2&&n.relative[p[0]])t=ga(p[0]+p[1],h);else for(t=n.relative[p[0]]?[h]:k(p.shift(),h);p.length;){g=p.shift();if(n.relative[g])g+=p.shift(); +t=ga(g,t)}else{if(!m&&p.length>1&&h.nodeType===9&&!M&&n.match.ID.test(p[0])&&!n.match.ID.test(p[p.length-1])){v=k.find(p.shift(),h,M);h=v.expr?k.filter(v.expr,v.set)[0]:v.set[0]}if(h){v=m?{expr:p.pop(),set:z(m)}:k.find(p.pop(),p.length===1&&(p[0]==="~"||p[0]==="+")&&h.parentNode?h.parentNode:h,M);t=v.expr?k.filter(v.expr,v.set):v.set;if(p.length>0)y=z(t);else H=false;for(;p.length;){var D=p.pop();v=D;if(n.relative[D])v=p.pop();else D="";if(v==null)v=h;n.relative[D](y,v,M)}}else y=[]}y||(y=t);y||k.error(D|| +g);if(j.call(y)==="[object Array]")if(H)if(h&&h.nodeType===1)for(g=0;y[g]!=null;g++){if(y[g]&&(y[g]===true||y[g].nodeType===1&&E(h,y[g])))l.push(t[g])}else for(g=0;y[g]!=null;g++)y[g]&&y[g].nodeType===1&&l.push(t[g]);else l.push.apply(l,y);else z(y,l);if(S){k(S,q,l,m);k.uniqueSort(l)}return l};k.uniqueSort=function(g){if(B){i=o;g.sort(B);if(i)for(var h=1;h":function(g,h){var l=typeof h==="string";if(l&&!/\W/.test(h)){h=h.toLowerCase();for(var m=0,q=g.length;m=0))l||m.push(v);else if(l)h[p]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()}, +CHILD:function(g){if(g[1]==="nth"){var h=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=h[1]+(h[2]||1)-0;g[3]=h[3]-0}g[0]=e++;return g},ATTR:function(g,h,l,m,q,p){h=g[1].replace(/\\/g,"");if(!p&&n.attrMap[h])g[1]=n.attrMap[h];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,h,l,m,q){if(g[1]==="not")if((f.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=k(g[3],null,null,h);else{g=k.filter(g[3],h,l,true^q);l||m.push.apply(m, +g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,h,l){return!!k(l[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)}, +text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}}, +setFilters:{first:function(g,h){return h===0},last:function(g,h,l,m){return 
h===m.length-1},even:function(g,h){return h%2===0},odd:function(g,h){return h%2===1},lt:function(g,h,l){return hl[3]-0},nth:function(g,h,l){return l[3]-0===h},eq:function(g,h,l){return l[3]-0===h}},filter:{PSEUDO:function(g,h,l,m){var q=h[1],p=n.filters[q];if(p)return p(g,l,h,m);else if(q==="contains")return(g.textContent||g.innerText||a([g])||"").indexOf(h[3])>=0;else if(q==="not"){h= +h[3];l=0;for(m=h.length;l=0}},ID:function(g,h){return g.nodeType===1&&g.getAttribute("id")===h},TAG:function(g,h){return h==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===h},CLASS:function(g,h){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(h)>-1},ATTR:function(g,h){var l=h[1];g=n.attrHandle[l]?n.attrHandle[l](g):g[l]!=null?g[l]:g.getAttribute(l);l=g+"";var m=h[2];h=h[4];return g==null?m==="!=":m=== +"="?l===h:m==="*="?l.indexOf(h)>=0:m==="~="?(" "+l+" ").indexOf(h)>=0:!h?l&&g!==false:m==="!="?l!==h:m==="^="?l.indexOf(h)===0:m==="$="?l.substr(l.length-h.length)===h:m==="|="?l===h||l.substr(0,h.length+1)===h+"-":false},POS:function(g,h,l,m){var q=n.setFilters[h[2]];if(q)return q(g,l,h,m)}}},r=n.match.POS;for(var u in n.match){n.match[u]=new RegExp(n.match[u].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[u]=new RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[u].source.replace(/\\(\d+)/g,function(g, +h){return"\\"+(h-0+1)}))}var z=function(g,h){g=Array.prototype.slice.call(g,0);if(h){h.push.apply(h,g);return h}return g};try{Array.prototype.slice.call(s.documentElement.childNodes,0)}catch(C){z=function(g,h){h=h||[];if(j.call(g)==="[object Array]")Array.prototype.push.apply(h,g);else if(typeof g.length==="number")for(var l=0,m=g.length;l";var l=s.documentElement;l.insertBefore(g,l.firstChild);if(s.getElementById(h)){n.find.ID=function(m,q,p){if(typeof q.getElementById!=="undefined"&&!p)return(q=q.getElementById(m[1]))?q.id===m[1]||typeof q.getAttributeNode!=="undefined"&& +q.getAttributeNode("id").nodeValue===m[1]?[q]:w:[]};n.filter.ID=function(m,q){var p=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&p&&p.nodeValue===q}}l.removeChild(g);l=g=null})();(function(){var g=s.createElement("div");g.appendChild(s.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(h,l){l=l.getElementsByTagName(h[1]);if(h[1]==="*"){h=[];for(var m=0;l[m];m++)l[m].nodeType===1&&h.push(l[m]);l=h}return l};g.innerHTML=""; +if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(h){return h.getAttribute("href",2)};g=null})();s.querySelectorAll&&function(){var g=k,h=s.createElement("div");h.innerHTML="
<p class='TEST'></p>
";if(!(h.querySelectorAll&&h.querySelectorAll(".TEST").length===0)){k=function(m,q,p,v){q=q||s;if(!v&&q.nodeType===9&&!x(q))try{return z(q.querySelectorAll(m),p)}catch(t){}return g(m,q,p,v)};for(var l in g)k[l]=g[l];h=null}}(); +(function(){var g=s.createElement("div");g.innerHTML="
";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(h,l,m){if(typeof l.getElementsByClassName!=="undefined"&&!m)return l.getElementsByClassName(h[1])};g=null}}})();var E=s.compareDocumentPosition?function(g,h){return!!(g.compareDocumentPosition(h)&16)}: +function(g,h){return g!==h&&(g.contains?g.contains(h):true)},x=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false},ga=function(g,h){var l=[],m="",q;for(h=h.nodeType?[h]:h;q=n.match.PSEUDO.exec(g);){m+=q[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;q=0;for(var p=h.length;q=0===d})};c.fn.extend({find:function(a){for(var b=this.pushStack("","find",a),d=0,f=0,e=this.length;f0)for(var j=d;j0},closest:function(a,b){if(c.isArray(a)){var d=[],f=this[0],e,j= +{},i;if(f&&a.length){e=0;for(var o=a.length;e-1:c(f).is(e)){d.push({selector:i,elem:f});delete j[i]}}f=f.parentNode}}return d}var k=c.expr.match.POS.test(a)?c(a,b||this.context):null;return this.map(function(n,r){for(;r&&r.ownerDocument&&r!==b;){if(k?k.index(r)>-1:c(r).is(a))return r;r=r.parentNode}return null})},index:function(a){if(!a||typeof a=== +"string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){a=typeof a==="string"?c(a,b||this.context):c.makeArray(a);b=c.merge(this.get(),a);return this.pushStack(qa(a[0])||qa(b[0])?b:c.unique(b))},andSelf:function(){return this.add(this.prevObject)}});c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode", +d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")? +a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,f){var e=c.map(this,b,d);eb.test(a)||(f=d);if(f&&typeof f==="string")e=c.filter(f,e);e=this.length>1?c.unique(e):e;if((this.length>1||gb.test(f))&&fb.test(a))e=e.reverse();return this.pushStack(e,a,R.call(arguments).join(","))}});c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return c.find.matches(a,b)},dir:function(a,b,d){var f=[];for(a=a[b];a&&a.nodeType!==9&&(d===w||a.nodeType!==1||!c(a).is(d));){a.nodeType=== +1&&f.push(a);a=a[b]}return f},nth:function(a,b,d){b=b||1;for(var f=0;a;a=a[d])if(a.nodeType===1&&++f===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var Ja=/ jQuery\d+="(?:\d+|null)"/g,V=/^\s+/,Ka=/(<([\w:]+)[^>]*?)\/>/g,hb=/^(?:area|br|col|embed|hr|img|input|link|meta|param)$/i,La=/<([\w:]+)/,ib=/"},F={option:[1,""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]};F.optgroup=F.option;F.tbody=F.tfoot=F.colgroup=F.caption=F.thead;F.th=F.td;if(!c.support.htmlSerialize)F._default=[1,"div
","
"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d= +c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==w)return this.empty().append((this[0]&&this[0].ownerDocument||s).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this}, +wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})}, +prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b, +this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,f;(f=this[d])!=null;d++)if(!a||c.filter(a,[f]).length){if(!b&&f.nodeType===1){c.cleanData(f.getElementsByTagName("*"));c.cleanData([f])}f.parentNode&&f.parentNode.removeChild(f)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild); +return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,f=this.ownerDocument;if(!d){d=f.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(Ja,"").replace(/=([^="'>\s]+\/)>/g,'="$1">').replace(V,"")],f)[0]}else return this.cloneNode(true)});if(a===true){ra(this,b);ra(this.find("*"),b.find("*"))}return b},html:function(a){if(a===w)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(Ja, +""):null;else if(typeof a==="string"&&!ta.test(a)&&(c.support.leadingWhitespace||!V.test(a))&&!F[(La.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Ka,Ma);try{for(var b=0,d=this.length;b0||e.cacheable||this.length>1?k.cloneNode(true):k)}o.length&&c.each(o,Qa)}return this}});c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var f=[];d=c(d);var e=this.length===1&&this[0].parentNode;if(e&&e.nodeType===11&&e.childNodes.length===1&&d.length===1){d[b](this[0]); +return this}else{e=0;for(var j=d.length;e0?this.clone(true):this).get();c.fn[b].apply(c(d[e]),i);f=f.concat(i)}return this.pushStack(f,a,d.selector)}}});c.extend({clean:function(a,b,d,f){b=b||s;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||s;for(var 
e=[],j=0,i;(i=a[j])!=null;j++){if(typeof i==="number")i+="";if(i){if(typeof i==="string"&&!jb.test(i))i=b.createTextNode(i);else if(typeof i==="string"){i=i.replace(Ka,Ma);var o=(La.exec(i)||["", +""])[1].toLowerCase(),k=F[o]||F._default,n=k[0],r=b.createElement("div");for(r.innerHTML=k[1]+i+k[2];n--;)r=r.lastChild;if(!c.support.tbody){n=ib.test(i);o=o==="table"&&!n?r.firstChild&&r.firstChild.childNodes:k[1]===""&&!n?r.childNodes:[];for(k=o.length-1;k>=0;--k)c.nodeName(o[k],"tbody")&&!o[k].childNodes.length&&o[k].parentNode.removeChild(o[k])}!c.support.leadingWhitespace&&V.test(i)&&r.insertBefore(b.createTextNode(V.exec(i)[0]),r.firstChild);i=r.childNodes}if(i.nodeType)e.push(i);else e= +c.merge(e,i)}}if(d)for(j=0;e[j];j++)if(f&&c.nodeName(e[j],"script")&&(!e[j].type||e[j].type.toLowerCase()==="text/javascript"))f.push(e[j].parentNode?e[j].parentNode.removeChild(e[j]):e[j]);else{e[j].nodeType===1&&e.splice.apply(e,[j+1,0].concat(c.makeArray(e[j].getElementsByTagName("script"))));d.appendChild(e[j])}return e},cleanData:function(a){for(var b,d,f=c.cache,e=c.event.special,j=c.support.deleteExpando,i=0,o;(o=a[i])!=null;i++)if(d=o[c.expando]){b=f[d];if(b.events)for(var k in b.events)e[k]? +c.event.remove(o,k):Ca(o,k,b.handle);if(j)delete o[c.expando];else o.removeAttribute&&o.removeAttribute(c.expando);delete f[d]}}});var kb=/z-?index|font-?weight|opacity|zoom|line-?height/i,Na=/alpha\([^)]*\)/,Oa=/opacity=([^)]*)/,ha=/float/i,ia=/-([a-z])/ig,lb=/([A-Z])/g,mb=/^-?\d+(?:px)?$/i,nb=/^-?\d/,ob={position:"absolute",visibility:"hidden",display:"block"},pb=["Left","Right"],qb=["Top","Bottom"],rb=s.defaultView&&s.defaultView.getComputedStyle,Pa=c.support.cssFloat?"cssFloat":"styleFloat",ja= +function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){return X(this,a,b,true,function(d,f,e){if(e===w)return c.curCSS(d,f);if(typeof e==="number"&&!kb.test(f))e+="px";c.style(d,f,e)})};c.extend({style:function(a,b,d){if(!a||a.nodeType===3||a.nodeType===8)return w;if((b==="width"||b==="height")&&parseFloat(d)<0)d=w;var f=a.style||a,e=d!==w;if(!c.support.opacity&&b==="opacity"){if(e){f.zoom=1;b=parseInt(d,10)+""==="NaN"?"":"alpha(opacity="+d*100+")";a=f.filter||c.curCSS(a,"filter")||"";f.filter= +Na.test(a)?a.replace(Na,b):b}return f.filter&&f.filter.indexOf("opacity=")>=0?parseFloat(Oa.exec(f.filter)[1])/100+"":""}if(ha.test(b))b=Pa;b=b.replace(ia,ja);if(e)f[b]=d;return f[b]},css:function(a,b,d,f){if(b==="width"||b==="height"){var e,j=b==="width"?pb:qb;function i(){e=b==="width"?a.offsetWidth:a.offsetHeight;f!=="border"&&c.each(j,function(){f||(e-=parseFloat(c.curCSS(a,"padding"+this,true))||0);if(f==="margin")e+=parseFloat(c.curCSS(a,"margin"+this,true))||0;else e-=parseFloat(c.curCSS(a, +"border"+this+"Width",true))||0})}a.offsetWidth!==0?i():c.swap(a,ob,i);return Math.max(0,Math.round(e))}return c.curCSS(a,b,d)},curCSS:function(a,b,d){var f,e=a.style;if(!c.support.opacity&&b==="opacity"&&a.currentStyle){f=Oa.test(a.currentStyle.filter||"")?parseFloat(RegExp.$1)/100+"":"";return f===""?"1":f}if(ha.test(b))b=Pa;if(!d&&e&&e[b])f=e[b];else if(rb){if(ha.test(b))b="float";b=b.replace(lb,"-$1").toLowerCase();e=a.ownerDocument.defaultView;if(!e)return null;if(a=e.getComputedStyle(a,null))f= +a.getPropertyValue(b);if(b==="opacity"&&f==="")f="1"}else if(a.currentStyle){d=b.replace(ia,ja);f=a.currentStyle[b]||a.currentStyle[d];if(!mb.test(f)&&nb.test(f)){b=e.left;var 
j=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;e.left=d==="fontSize"?"1em":f||0;f=e.pixelLeft+"px";e.left=b;a.runtimeStyle.left=j}}return f},swap:function(a,b,d){var f={};for(var e in b){f[e]=a.style[e];a.style[e]=b[e]}d.call(a);for(e in b)a.style[e]=f[e]}});if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b= +a.offsetWidth,d=a.offsetHeight,f=a.nodeName.toLowerCase()==="tr";return b===0&&d===0&&!f?true:b>0&&d>0&&!f?false:c.curCSS(a,"display")==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var sb=J(),tb=//gi,ub=/select|textarea/i,vb=/color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week/i,N=/=\?(&|$)/,ka=/\?/,wb=/(\?|&)_=.*?(&|$)/,xb=/^(\w+:)?\/\/([^\/?#]+)/,yb=/%20/g,zb=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!== +"string")return zb.call(this,a);else if(!this.length)return this;var f=a.indexOf(" ");if(f>=0){var e=a.slice(f,a.length);a=a.slice(0,f)}f="GET";if(b)if(c.isFunction(b)){d=b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);f="POST"}var j=this;c.ajax({url:a,type:f,dataType:"html",data:b,complete:function(i,o){if(o==="success"||o==="notmodified")j.html(e?c("
").append(i.responseText.replace(tb,"")).find(e):i.responseText);d&&j.each(d,[i.responseText,o,i])}});return this}, +serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||ub.test(this.nodeName)||vb.test(this.type))}).map(function(a,b){a=c(this).val();return a==null?null:c.isArray(a)?c.map(a,function(d){return{name:b.name,value:d}}):{name:b.name,value:a}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "), +function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:f})},getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:f})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href, +global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:A.XMLHttpRequest&&(A.location.protocol!=="file:"||!A.ActiveXObject)?function(){return new A.XMLHttpRequest}:function(){try{return new A.ActiveXObject("Microsoft.XMLHTTP")}catch(a){}},accepts:{xml:"application/xml, text/xml",html:"text/html",script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},lastModified:{},etag:{},ajax:function(a){function b(){e.success&& +e.success.call(k,o,i,x);e.global&&f("ajaxSuccess",[x,e])}function d(){e.complete&&e.complete.call(k,x,i);e.global&&f("ajaxComplete",[x,e]);e.global&&!--c.active&&c.event.trigger("ajaxStop")}function f(q,p){(e.context?c(e.context):c.event).trigger(q,p)}var e=c.extend(true,{},c.ajaxSettings,a),j,i,o,k=a&&a.context||e,n=e.type.toUpperCase();if(e.data&&e.processData&&typeof e.data!=="string")e.data=c.param(e.data,e.traditional);if(e.dataType==="jsonp"){if(n==="GET")N.test(e.url)||(e.url+=(ka.test(e.url)? 
+"&":"?")+(e.jsonp||"callback")+"=?");else if(!e.data||!N.test(e.data))e.data=(e.data?e.data+"&":"")+(e.jsonp||"callback")+"=?";e.dataType="json"}if(e.dataType==="json"&&(e.data&&N.test(e.data)||N.test(e.url))){j=e.jsonpCallback||"jsonp"+sb++;if(e.data)e.data=(e.data+"").replace(N,"="+j+"$1");e.url=e.url.replace(N,"="+j+"$1");e.dataType="script";A[j]=A[j]||function(q){o=q;b();d();A[j]=w;try{delete A[j]}catch(p){}z&&z.removeChild(C)}}if(e.dataType==="script"&&e.cache===null)e.cache=false;if(e.cache=== +false&&n==="GET"){var r=J(),u=e.url.replace(wb,"$1_="+r+"$2");e.url=u+(u===e.url?(ka.test(e.url)?"&":"?")+"_="+r:"")}if(e.data&&n==="GET")e.url+=(ka.test(e.url)?"&":"?")+e.data;e.global&&!c.active++&&c.event.trigger("ajaxStart");r=(r=xb.exec(e.url))&&(r[1]&&r[1]!==location.protocol||r[2]!==location.host);if(e.dataType==="script"&&n==="GET"&&r){var z=s.getElementsByTagName("head")[0]||s.documentElement,C=s.createElement("script");C.src=e.url;if(e.scriptCharset)C.charset=e.scriptCharset;if(!j){var B= +false;C.onload=C.onreadystatechange=function(){if(!B&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){B=true;b();d();C.onload=C.onreadystatechange=null;z&&C.parentNode&&z.removeChild(C)}}}z.insertBefore(C,z.firstChild);return w}var E=false,x=e.xhr();if(x){e.username?x.open(n,e.url,e.async,e.username,e.password):x.open(n,e.url,e.async);try{if(e.data||a&&a.contentType)x.setRequestHeader("Content-Type",e.contentType);if(e.ifModified){c.lastModified[e.url]&&x.setRequestHeader("If-Modified-Since", +c.lastModified[e.url]);c.etag[e.url]&&x.setRequestHeader("If-None-Match",c.etag[e.url])}r||x.setRequestHeader("X-Requested-With","XMLHttpRequest");x.setRequestHeader("Accept",e.dataType&&e.accepts[e.dataType]?e.accepts[e.dataType]+", */*":e.accepts._default)}catch(ga){}if(e.beforeSend&&e.beforeSend.call(k,x,e)===false){e.global&&!--c.active&&c.event.trigger("ajaxStop");x.abort();return false}e.global&&f("ajaxSend",[x,e]);var g=x.onreadystatechange=function(q){if(!x||x.readyState===0||q==="abort"){E|| +d();E=true;if(x)x.onreadystatechange=c.noop}else if(!E&&x&&(x.readyState===4||q==="timeout")){E=true;x.onreadystatechange=c.noop;i=q==="timeout"?"timeout":!c.httpSuccess(x)?"error":e.ifModified&&c.httpNotModified(x,e.url)?"notmodified":"success";var p;if(i==="success")try{o=c.httpData(x,e.dataType,e)}catch(v){i="parsererror";p=v}if(i==="success"||i==="notmodified")j||b();else c.handleError(e,x,i,p);d();q==="timeout"&&x.abort();if(e.async)x=null}};try{var h=x.abort;x.abort=function(){x&&h.call(x); +g("abort")}}catch(l){}e.async&&e.timeout>0&&setTimeout(function(){x&&!E&&g("timeout")},e.timeout);try{x.send(n==="POST"||n==="PUT"||n==="DELETE"?e.data:null)}catch(m){c.handleError(e,x,null,m);d()}e.async||g();return x}},handleError:function(a,b,d,f){if(a.error)a.error.call(a.context||a,b,d,f);if(a.global)(a.context?c(a.context):c.event).trigger("ajaxError",[b,a,f])},active:0,httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status=== +1223||a.status===0}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),f=a.getResponseHeader("Etag");if(d)c.lastModified[b]=d;if(f)c.etag[b]=f;return a.status===304||a.status===0},httpData:function(a,b,d){var f=a.getResponseHeader("content-type")||"",e=b==="xml"||!b&&f.indexOf("xml")>=0;a=e?a.responseXML:a.responseText;e&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof 
a==="string")if(b=== +"json"||!b&&f.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&f.indexOf("javascript")>=0)c.globalEval(a);return a},param:function(a,b){function d(i,o){if(c.isArray(o))c.each(o,function(k,n){b||/\[\]$/.test(i)?f(i,n):d(i+"["+(typeof n==="object"||c.isArray(n)?k:"")+"]",n)});else!b&&o!=null&&typeof o==="object"?c.each(o,function(k,n){d(i+"["+k+"]",n)}):f(i,o)}function f(i,o){o=c.isFunction(o)?o():o;e[e.length]=encodeURIComponent(i)+"="+encodeURIComponent(o)}var e=[];if(b===w)b=c.ajaxSettings.traditional; +if(c.isArray(a)||a.jquery)c.each(a,function(){f(this.name,this.value)});else for(var j in a)d(j,a[j]);return e.join("&").replace(yb,"+")}});var la={},Ab=/toggle|show|hide/,Bb=/^([+-]=)?([\d+-.]+)(.*)$/,W,va=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b){if(a||a===0)return this.animate(K("show",3),a,b);else{a=0;for(b=this.length;a").appendTo("body");f=e.css("display");if(f==="none")f="block";e.remove();la[d]=f}c.data(this[a],"olddisplay",f)}}a=0;for(b=this.length;a=0;f--)if(d[f].elem===this){b&&d[f](true);d.splice(f,1)}});b||this.dequeue();return this}});c.each({slideDown:K("show",1),slideUp:K("hide",1),slideToggle:K("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,f){return this.animate(b,d,f)}});c.extend({speed:function(a,b,d){var f=a&&typeof a==="object"?a:{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};f.duration=c.fx.off?0:typeof f.duration=== +"number"?f.duration:c.fx.speeds[f.duration]||c.fx.speeds._default;f.old=f.complete;f.complete=function(){f.queue!==false&&c(this).dequeue();c.isFunction(f.old)&&f.old.call(this)};return f},easing:{linear:function(a,b,d,f){return d+f*a},swing:function(a,b,d,f){return(-Math.cos(a*Math.PI)/2+0.5)*f+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]|| +c.fx.step._default)(this);if((this.prop==="height"||this.prop==="width")&&this.elem.style)this.elem.style.display="block"},cur:function(a){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];return(a=parseFloat(c.css(this.elem,this.prop,a)))&&a>-10000?a:parseFloat(c.curCSS(this.elem,this.prop))||0},custom:function(a,b,d){function f(j){return e.step(j)}this.startTime=J();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start; +this.pos=this.state=0;var e=this;f.elem=this.elem;if(f()&&c.timers.push(f)&&!W)W=setInterval(c.fx.tick,13)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;this.custom(this.cur(),0)},step:function(a){var b=J(),d=true;if(a||b>=this.options.duration+this.startTime){this.now= +this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var f in 
this.options.curAnim)if(this.options.curAnim[f]!==true)d=false;if(d){if(this.options.display!=null){this.elem.style.overflow=this.options.overflow;a=c.data(this.elem,"olddisplay");this.elem.style.display=a?a:this.options.display;if(c.css(this.elem,"display")==="none")this.elem.style.display="block"}this.options.hide&&c(this.elem).hide();if(this.options.hide||this.options.show)for(var e in this.options.curAnim)c.style(this.elem, +e,this.options.orig[e]);this.options.complete.call(this.elem)}return false}else{e=b-this.startTime;this.state=e/this.options.duration;a=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||a](this.state,e,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=c.timers,b=0;b
"; +a.insertBefore(b,a.firstChild);d=b.firstChild;f=d.firstChild;e=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=f.offsetTop!==5;this.doesAddBorderForTableAndCells=e.offsetTop===5;f.style.position="fixed";f.style.top="20px";this.supportsFixedPosition=f.offsetTop===20||f.offsetTop===15;f.style.position=f.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=f.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==j;a.removeChild(b); +c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.curCSS(a,"marginTop",true))||0;d+=parseFloat(c.curCSS(a,"marginLeft",true))||0}return{top:b,left:d}},setOffset:function(a,b,d){if(/static/.test(c.curCSS(a,"position")))a.style.position="relative";var f=c(a),e=f.offset(),j=parseInt(c.curCSS(a,"top",true),10)||0,i=parseInt(c.curCSS(a,"left",true),10)||0;if(c.isFunction(b))b=b.call(a, +d,e);d={top:b.top-e.top+j,left:b.left-e.left+i};"using"in b?b.using.call(a,d):f.css(d)}};c.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),d=this.offset(),f=/^body|html$/i.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.curCSS(a,"marginTop",true))||0;d.left-=parseFloat(c.curCSS(a,"marginLeft",true))||0;f.top+=parseFloat(c.curCSS(b[0],"borderTopWidth",true))||0;f.left+=parseFloat(c.curCSS(b[0],"borderLeftWidth",true))||0;return{top:d.top- +f.top,left:d.left-f.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||s.body;a&&!/^body|html$/i.test(a.nodeName)&&c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(f){var e=this[0],j;if(!e)return null;if(f!==w)return this.each(function(){if(j=wa(this))j.scrollTo(!a?f:c(j).scrollLeft(),a?f:c(j).scrollTop());else this[d]=f});else return(j=wa(e))?"pageXOffset"in j?j[a?"pageYOffset": +"pageXOffset"]:c.support.boxModel&&j.document.documentElement[d]||j.document.body[d]:e[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();c.fn["inner"+b]=function(){return this[0]?c.css(this[0],d,false,"padding"):null};c.fn["outer"+b]=function(f){return this[0]?c.css(this[0],d,false,f?"margin":"border"):null};c.fn[d]=function(f){var e=this[0];if(!e)return f==null?null:this;if(c.isFunction(f))return this.each(function(j){var i=c(this);i[d](f.call(this,j,i[d]()))});return"scrollTo"in +e&&e.document?e.document.compatMode==="CSS1Compat"&&e.document.documentElement["client"+b]||e.document.body["client"+b]:e.nodeType===9?Math.max(e.documentElement["client"+b],e.body["scroll"+b],e.documentElement["scroll"+b],e.body["offset"+b],e.documentElement["offset"+b]):f===w?c.css(e,d):this.css(d,typeof f==="string"?f:f+"px")}});A.jQuery=A.$=c})(window); Index: content/resources/docs/1.99.4/_static/minus.png =================================================================== Cannot display: file marked as a binary type. 
svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/minus.png =================================================================== --- content/resources/docs/1.99.4/_static/minus.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/minus.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/minus.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/plus.png =================================================================== Cannot display: file marked as a binary type. svn:mime-type = application/octet-stream Index: content/resources/docs/1.99.4/_static/plus.png =================================================================== --- content/resources/docs/1.99.4/_static/plus.png (revision 1641479) +++ content/resources/docs/1.99.4/_static/plus.png (working copy) Property changes on: content/resources/docs/1.99.4/_static/plus.png ___________________________________________________________________ Added: svn:mime-type ## -0,0 +1 ## +application/octet-stream \ No newline at end of property Index: content/resources/docs/1.99.4/_static/pygments.css =================================================================== --- content/resources/docs/1.99.4/_static/pygments.css (revision 0) +++ content/resources/docs/1.99.4/_static/pygments.css (working copy) @@ -0,0 +1,60 @@ +.highlight .hll { background-color: #ffffcc } +.highlight { background: #ffffff; } +.highlight .c { color: #999988; font-style: italic } /* Comment */ +.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */ +.highlight .k { font-weight: bold } /* Keyword */ +.highlight .o { font-weight: bold } /* Operator */ +.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */ +.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */ +.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .gr { color: #aa0000 } /* Generic.Error */ +.highlight .gh { color: #999999 } /* Generic.Heading */ +.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */ +.highlight .go { color: #888888 } /* Generic.Output */ +.highlight .gp { color: #555555 } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #aaaaaa } /* Generic.Subheading */ +.highlight .gt { color: #aa0000 } /* Generic.Traceback */ +.highlight .kc { font-weight: bold } /* Keyword.Constant */ +.highlight .kd { font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { font-weight: bold } /* Keyword.Pseudo */ +.highlight .kr { font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */ +.highlight .m { color: #009999 } /* Literal.Number */ +.highlight .s { color: #bb8844 } /* Literal.String */ +.highlight .na { color: #008080 } /* Name.Attribute */ +.highlight .nb { color: #999999 } /* Name.Builtin */ +.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */ +.highlight .no { color: #008080 } /* Name.Constant */ +.highlight .ni { color: #800080 
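+/* Hedged editorial example of how these selectors meet Pygments' output:
+   for a Python snippet "def foo():", Sphinx emits roughly
+   <div class="highlight"><pre><span class="k">def</span> <span class="nf">foo</span>():</pre></div>
+   so ".highlight .k" above bolds the keyword and ".highlight .nf" below
+   colors the function name. */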
} /* Name.Entity */ +.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */ +.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */ +.highlight .nn { color: #555555 } /* Name.Namespace */ +.highlight .nt { color: #000080 } /* Name.Tag */ +.highlight .nv { color: #008080 } /* Name.Variable */ +.highlight .ow { font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mf { color: #009999 } /* Literal.Number.Float */ +.highlight .mh { color: #009999 } /* Literal.Number.Hex */ +.highlight .mi { color: #009999 } /* Literal.Number.Integer */ +.highlight .mo { color: #009999 } /* Literal.Number.Oct */ +.highlight .sb { color: #bb8844 } /* Literal.String.Backtick */ +.highlight .sc { color: #bb8844 } /* Literal.String.Char */ +.highlight .sd { color: #bb8844 } /* Literal.String.Doc */ +.highlight .s2 { color: #bb8844 } /* Literal.String.Double */ +.highlight .se { color: #bb8844 } /* Literal.String.Escape */ +.highlight .sh { color: #bb8844 } /* Literal.String.Heredoc */ +.highlight .si { color: #bb8844 } /* Literal.String.Interpol */ +.highlight .sx { color: #bb8844 } /* Literal.String.Other */ +.highlight .sr { color: #808000 } /* Literal.String.Regex */ +.highlight .s1 { color: #bb8844 } /* Literal.String.Single */ +.highlight .ss { color: #bb8844 } /* Literal.String.Symbol */ +.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */ +.highlight .vc { color: #008080 } /* Name.Variable.Class */ +.highlight .vg { color: #008080 } /* Name.Variable.Global */ +.highlight .vi { color: #008080 } /* Name.Variable.Instance */ +.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */ \ No newline at end of file Index: content/resources/docs/1.99.4/_static/searchtools.js =================================================================== --- content/resources/docs/1.99.4/_static/searchtools.js (revision 0) +++ content/resources/docs/1.99.4/_static/searchtools.js (working copy) @@ -0,0 +1,560 @@ +/* + * searchtools.js_t + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words, hlwords is the list of normal, unstemmed + * words. the first one is used to find the occurrence, the + * latter for highlighting it. + */ + +jQuery.makeSearchSummary = function(text, keywords, hlwords) { + var textLower = text.toLowerCase(); + var start = 0; + $.each(keywords, function() { + var i = textLower.indexOf(this.toLowerCase()); + if (i > -1) + start = i; + }); + start = Math.max(start - 120, 0); + var excerpt = ((start > 0) ? '...' : '') + + $.trim(text.substr(start, 240)) + + ((start + 240 < text.length) ? '...' : ''); + var rv = $('<div class="context"></div>
').text(excerpt); + $.each(hlwords, function() { + rv = rv.highlightText(this, 'highlighted'); + }); + return rv; +} + + +/** + * Porter Stemmer + */ +var Stemmer = function() { + + var step2list = { + ational: 'ate', + tional: 'tion', + enci: 'ence', + anci: 'ance', + izer: 'ize', + bli: 'ble', + alli: 'al', + entli: 'ent', + eli: 'e', + ousli: 'ous', + ization: 'ize', + ation: 'ate', + ator: 'ate', + alism: 'al', + iveness: 'ive', + fulness: 'ful', + ousness: 'ous', + aliti: 'al', + iviti: 'ive', + biliti: 'ble', + logi: 'log' + }; + + var step3list = { + icate: 'ic', + ative: '', + alize: 'al', + iciti: 'ic', + ical: 'ic', + ful: '', + ness: '' + }; + + var c = "[^aeiou]"; // consonant + var v = "[aeiouy]"; // vowel + var C = c + "[^aeiouy]*"; // consonant sequence + var V = v + "[aeiou]*"; // vowel sequence + + var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" + v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + 
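+ // A hedged usage sketch of the two helpers above (editorial illustration;
+ // the variable names, the sample query, and docText are invented, and
+ // highlightText() is assumed to come from Sphinx's doctools.js):
+ //
+ //   var stemmer = new Stemmer();
+ //   var words = ['building', 'packages'];
+ //   var stemmed = $.map(words, function(w) {
+ //     return stemmer.stemWord(w.toLowerCase());
+ //   });
+ //   // stemmed -> ['build', 'packag']: the stems are matched against the
+ //   // search index, while the unstemmed words drive the highlighting.
+ //   // docText: raw text of a matched page (placeholder):
+ //   var summary = jQuery.makeSearchSummary(docText, stemmed, words);
+ //   $('#search-results').append(summary);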
re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + + +/** + * Search Module + */ +var Search = { + + _index : null, + _queued_query : null, + _pulse_status : -1, + + init : function() { + var params = $.getQueryParameters(); + if (params.q) { + var query = params.q[0]; + $('input[name="q"]')[0].value = query; + this.performSearch(query); + } + }, + + loadIndex : function(url) { + $.ajax({type: "GET", url: url, data: null, success: null, + dataType: "script", cache: true}); + }, + + setIndex : function(index) { + var q; + this._index = index; + if ((q = this._queued_query) !== null) { + this._queued_query = null; + Search.query(q); + } + }, + + hasIndex : function() { + return this._index !== null; + }, + + deferQuery : function(query) { + this._queued_query = query; + }, + + stopPulse : function() { + this._pulse_status = 0; + }, + + startPulse : function() { + if (this._pulse_status >= 0) + return; + function pulse() { + Search._pulse_status = (Search._pulse_status + 1) % 4; + var dotString = ''; + for (var i = 0; i < Search._pulse_status; i++) + dotString += '.'; + Search.dots.text(dotString); + if (Search._pulse_status > -1) + window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something + */ + performSearch : function(query) { + // create the required interface elements + this.out = $('#search-results'); + this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out); + this.dots = $('<span></span>').appendTo(this.title); + this.status = $('<p style="display: none"></p>').appendTo(this.out); + this.output = $('