Inspired by Postgres 
A common use case is bulk data load through the JDBC/ODBC interface. Currently it is only possible to execute single commands one by one. We can already batch them to improve performance, but there is still big room for improvement.
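For reference, a minimal sketch of what batching looks like today over the thin JDBC driver; the table, columns and batch size here are illustrative assumptions:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInsertExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             PreparedStatement ps = conn.prepareStatement("INSERT INTO city (id, name) VALUES (?, ?)")) {
            for (long id = 0; id < 100_000; id++) {
                ps.setLong(1, id);
                ps.setString(2, "City-" + id);
                ps.addBatch();

                if (id % 1000 == 999)
                    ps.executeBatch(); // Flush every 1000 rows.
            }

            ps.executeBatch(); // Flush the tail of the batch.
        }
    }
}
{code}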
We should think of a completely new command - COPY. It would accept a file (or an input stream in the general case) on the client side, transfer the data to the cluster, and then execute the update inside the cluster, e.g. through the streamer.
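For illustration, an invocation could look like the sketch below. The actual syntax is yet to be designed, so the statement text (Postgres-inspired) is purely an assumption:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CopyCommandSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Hypothetical syntax: the file is read on the client, and parsed
            // rows are shipped to the cluster and applied there (e.g. via the
            // streamer).
            stmt.executeUpdate("COPY FROM '/path/to/city.csv' INTO city (id, name) FORMAT CSV");
        }
    }
}
{code}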
First of all we need to create a quick and dirty prototype to assess the potential performance improvement. If the speedup is confirmed, we should build a base implementation which will accept only files. But at the same time we should understand how it will evolve in future: multiple file formats (probably including Hadoop formats, e.g. Parquet), escape characters, input streams, etc.
We may want to gradually add features to this command in future to have something like this:
- We support the CSV format described in RFC 4180 (a minimal field-splitter sketch follows this list).
- Row and column separators and quoting characters are currently hardcoded; custom ones are not supported yet.
- Escape sequences and line comment characters are currently not supported.
- We may want to support fixed-length formats (via format descriptors) in future
- We may want to strip comments from lines (for example, starting with '#')
- We may want to allow the user to either ignore empty lines or treat them as a special case of a record having all default values
- We may allow the user to enable whitespace trimming from the beginning and end of a line
- We may want to allow the user to specify an error handling strategy: e.g., what to do when only one quote character is present or an escape sequence is invalid.
- File character set to be supported in future
- Skipped/imported row number (or first/last line or skip header option), skipped/imported column number (or first/last column): to be supported in future
- Line start pattern (as in MySQL): no support planned
- We currently support only client-side import. No server-side file import.
- We may want to support client-side stdin import in future.
- We do not handle importing multiple files from single command
- We don't benefit from any kind of pre-sorting or pre-partitioning of data on the client side.
- We don't include any metadata, such as the line number, from the client side.
- We send file data in batches. In future we will support a configurable batch size (specified as rows per batch or as a data block size).
- We may want to implement data compression in future.
- We connect to a single node in the JDBC driver (no multi-node connections).
- We don't create the table in the bulk load command.
- We may want to have an option for reading a header row that contains column names to match against table columns.
- In future we may wish to support COLUMNS (col1, _, col2, _, col3) syntax, where the '_' marker means a skipped column (MySQL uses '@dummy' for this).
- Data types are converted as if they were supplied to an INSERT SQL command.
- We may want type conversion (automatic, custom via an SQL function, custom via Java code, string auto-trimming) in future.
- We will support an optional null sequence ("\N") later.
- We may want to allow the user to specify what to do if the same record already exists (e.g., ignore the record, replace it, or report an error, with a max. error count before failing the command).
- We don't currently support any generated/autoincremented row IDs or any custom generators.
- We don't support any filtering/conditional expressions
- We don't support continuing past multiple conversion errors generated while importing a file/recordset/table.
- We may want an option to select how we insert the data into the cache: e.g., using cache.putAll(...) or via the data streamer interface (see the BACKEND option; a sketch follows this list).
- We don't use transactions
- We don't create locks on rows or tables.
- We don't try to minimize any indexing overhead (it's up to the user)
- We may want to minimize WAL impact in future via NOLOGGING option.
- We don't supply a utility to load data.
- We don't currently supply any Java loaders (as in PG and MSSQL) that stream data (not necessarily from a file).
- Security-related questions are out of scope of this JIRA
- We don't have triggers and constraints in Apache Ignite
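As referenced in the CSV item above, here is a minimal sketch of an RFC 4180-style field splitter with the hardcoded ',' separator and '"' quote character; records with embedded line breaks are out of scope of the sketch:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class CsvLineSplitter {
    /** Splits one record into fields; doubled quotes unescape to a single quote. */
    public static List<String> split(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder field = new StringBuilder();
        boolean quoted = false;

        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);

            if (quoted) {
                if (c == '"' && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                    field.append('"'); // Escaped quote inside a quoted field.
                    i++;
                }
                else if (c == '"')
                    quoted = false; // End of the quoted part of a field.
                else
                    field.append(c);
            }
            else if (c == '"')
                quoted = true;
            else if (c == ',') {
                fields.add(field.toString());
                field.setLength(0);
            }
            else
                field.append(c);
        }

        fields.add(field.toString());

        return fields;
    }
}
{code}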
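And the BACKEND choice mentioned above, sketched against the public Ignite API; the cache name and value layout are assumptions:

{code:java}
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;

public class BulkLoadBackends {
    /** Streamer backend: buffers and routes entries per node, best for large loads. */
    static void viaStreamer(Ignite ignite, Map<Long, Object[]> rows) {
        try (IgniteDataStreamer<Long, Object[]> streamer = ignite.dataStreamer("cityCache")) {
            streamer.allowOverwrite(true); // Overwrite existing keys, as putAll would.

            for (Map.Entry<Long, Object[]> e : rows.entrySet())
                streamer.addData(e.getKey(), e.getValue());
        } // close() flushes the remaining buffered entries.
    }

    /** putAll backend: simpler, but without per-node batching and buffering. */
    static void viaPutAll(IgniteCache<Long, Object[]> cache, Map<Long, Object[]> rows) {
        cache.putAll(rows);
    }
}
{code}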
For comparison, here is how other vendors approach bulk loading.

PostgreSQL:
- Server-side file import
- Client-side: only from STDIN
- Protocol implementation: via special command in the protocol
- Special bulk data loaders are implemented as part of the JDBC driver package: org.postgresql.copy.CopyManager (a usage sketch follows this section)
- Custom loaders available (e.g., https://github.com/bytefish/PgBulkInsert.git)
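A usage sketch of CopyManager, which streams a local file to the server through the COPY sub-protocol (the table name is illustrative):

{code:java}
import java.io.FileReader;
import java.sql.Connection;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;

public class PgCopyExample {
    /** Returns the number of rows loaded from a client-side CSV file. */
    static long copyCsv(Connection conn, String path) throws Exception {
        CopyManager copyManager = new CopyManager((BaseConnection)conn);

        try (FileReader reader = new FileReader(path)) {
            return copyManager.copyIn("COPY city FROM STDIN WITH (FORMAT csv)", reader);
        }
    }
}
{code}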

MySQL:
- Both client- and server-side import
- Protocol implementation via a hack: if a result set is returned with column count == -1, the client reads the file name from the server and immediately sends the file contents.
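From the application's point of view the hack is invisible: LOAD DATA LOCAL INFILE is executed as an ordinary statement (Connector/J additionally requires the allowLoadLocalInfile connection property). The table and delimiters below are illustrative:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MySqlLoadDataExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/test?allowLoadLocalInfile=true", "user", "pass");
             Statement stmt = conn.createStatement()) {
            // The driver intercepts the special server response and streams the
            // named client-side file back over the same connection.
            stmt.execute("LOAD DATA LOCAL INFILE '/path/to/city.csv' INTO TABLE city " +
                "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\\n'");
        }
    }
}
{code}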

Microsoft SQL Server:
- Server-side import
- CLI utility to import from client side
- Protocol implementation: Special packet types: column definition and row
- Custom bulk data loader supplied in the JDBC driver package: com.microsoft.sqlserver.jdbc.SQLServerBulkCopy (a usage sketch follows this section).
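A usage sketch of that loader; the destination table and column metadata are illustrative:

{code:java}
import java.sql.Connection;
import java.sql.Types;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCSVFileRecord;

public class SqlServerBulkCopyExample {
    static void load(Connection conn, String path) throws Exception {
        // Second argument: the first line of the file contains column names.
        SQLServerBulkCSVFileRecord record = new SQLServerBulkCSVFileRecord(path, true);
        record.addColumnMetadata(1, "id", Types.INTEGER, 0, 0);
        record.addColumnMetadata(2, "name", Types.NVARCHAR, 50, 0);

        try (SQLServerBulkCopy bulkCopy = new SQLServerBulkCopy(conn)) {
            bulkCopy.setDestinationTableName("dbo.city");
            bulkCopy.writeToServer(record);
        }
    }
}
{code}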

Oracle:
There is no bulk load SQL command. Bulk loading external data can be achieved via:
- Oracle External Tables
- There is a separate utility for Oracle TimesTen in-memory database:

Other databases:
- Apache Hive:
- Apache HBase:
- SAP IQ: (creating pipelines and connecting them to the LOAD DATA statement is also a notable feature)
- IBM DB2:
- IBM Informix:
- Apache Derby (AKA Java DB, Apache DB):
- Google Cloud Spanner: