Sometimes the Java stack trace is swallowed when submitting a PyFlink job via "flink run", e.g.:
File "/home/cdh272705/poc/T24_parse.py", line 179, in from_kafka_to_oracle_demo
File "/home/cdh272705/.local/lib/python3.6/site-packages/pyflink/table/table.py", line 783, in execute_insert
return TableResult(self._j_table.executeInsert(table_path, overwrite))
File "/home/cdh272705/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1286, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/home/cdh272705/.local/lib/python3.6/site-packages/pyflink/util/exceptions.py", line 154, in deco
raise exception_mapping[exception](s.split(': ', 1), stack_trace)
pyflink.util.exceptions.TableException: 'Failed to execute sql'
The Java stack trace underlying the TableException is swallowed, which makes troubleshooting difficult: the user only sees 'Failed to execute sql' without the root cause.
We should improve the error-reporting logic so that the Java stack trace is surfaced to the user.
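As a rough illustration of the direction, the py4j-to-Python exception mapping could append the Java stack trace to the raised exception's message instead of discarding it. The sketch below is a minimal, self-contained mock: the class names and the `convert_java_error` helper are assumptions for illustration, not the actual code in pyflink/util/exceptions.py.

```python
# Hypothetical sketch: keep the Java stack trace when converting a
# py4j error into a Python exception. All names here are illustrative
# assumptions, not PyFlink's real implementation.

class TableException(Exception):
    """Python-side counterpart of org.apache.flink.table.api.TableException."""


# Maps the Java exception class name to a Python exception type.
exception_mapping = {
    "org.apache.flink.table.api.TableException": TableException,
}


def convert_java_error(java_class, java_message, java_stack_trace):
    """Build the Python exception, appending the full Java stack trace
    to the message rather than dropping it."""
    exc_type = exception_mapping.get(java_class, Exception)
    # Including the Java stack trace in the message means "flink run"
    # users see the root cause, not just 'Failed to execute sql'.
    return exc_type("%s\n%s" % (java_message, java_stack_trace))
```

With this approach, the example above would print the Java frames beneath the TableException message instead of a bare 'Failed to execute sql'.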