It would be nice to be able to make a table in a JDBC database appear as a table in Spark SQL. This would let users, for instance, perform a JOIN between a DataFrame in Spark SQL and a table in a Postgres database.
It would also be useful to go in the other direction – save a DataFrame to a database table – for instance in an ETL job.
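As a rough sketch of the desired round trip (pseudocode only – the method and parameter names here are illustrative, not an existing API):

```
// read: table becomes a DataFrame, schema inferred from database metadata
people = sqlContext.load("jdbc", url = "jdbc:postgresql://host/db", table = "people")
people.join(otherDf, "id")

// write: ETL-style save of a DataFrame back into a database table
result.save("jdbc", url = "jdbc:postgresql://host/db", table = "people_out")
```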
Edited to clarify: Both of these tasks are certainly possible today with a bit of ad-hoc glue code. However, there is no fundamental reason the user should have to supply the table schema, plus code for pulling data out of a ResultSet row into a Catalyst Row, when that information can be derived from the schema of the database table itself.
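The "derivable from the database itself" point can be illustrated outside Spark with a minimal sketch using Python's stdlib sqlite3 as a stand-in for a JDBC source: the column names come from the cursor's own metadata rather than from user-written schema code (over JDBC, ResultSetMetaData would similarly supply names and types).

```python
import sqlite3

# Stand-in for a JDBC source: an in-memory table. Note the reading code
# below never hard-codes this schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "ann", 34), (2, "bob", 29)])

# The point of the issue: the schema is available from the database's own
# metadata, so the user should not have to supply it by hand.
cur = conn.execute("SELECT * FROM people")
schema = [desc[0] for desc in cur.description]
rows = [dict(zip(schema, row)) for row in cur.fetchall()]

print(schema)  # column names derived from the table, not from user code
print(rows)
```

A JDBC data source could do the same derivation once, generically, instead of every user writing per-table glue.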