Details
Description
Running a TensorFlow model is already supported through the Camel DJL component. However, Camel users might prefer to externalise inferencing to a dedicated server instead of running it inside the Camel route. For TensorFlow models, this is typically done with TensorFlow Serving, a model server that exposes TensorFlow models for inferencing over REST and gRPC APIs. Camel should provide a producer component that makes it easy to invoke the TensorFlow Serving REST API from routes.
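For reference, TensorFlow Serving's REST API accepts predict requests as `POST /v1/models/{model}:predict` with a JSON body of the form `{"instances": [...]}` (8501 is its default REST port). A minimal sketch of building such a request in plain Java, which the proposed component would essentially wrap (the `mnist` model name and helper methods here are illustrative, not part of any existing API):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class TfServingPredict {

    // Builds the TensorFlow Serving REST predict URL for a model,
    // e.g. http://localhost:8501/v1/models/mnist:predict
    static String predictUrl(String host, int port, String model) {
        return "http://" + host + ":" + port + "/v1/models/" + model + ":predict";
    }

    // Builds a minimal predict request body with a single input instance,
    // e.g. {"instances": [[0.0, 1.0]]}
    static String requestBody(double[] instance) {
        StringBuilder sb = new StringBuilder("{\"instances\": [[");
        for (int i = 0; i < instance.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append(instance[i]);
        }
        sb.append("]]}");
        return sb.toString();
    }

    public static void main(String[] args) {
        String url = predictUrl("localhost", 8501, "mnist"); // model name is hypothetical
        String body = requestBody(new double[] {0.0, 1.0});
        System.out.println(url);
        System.out.println(body);

        // The actual HTTP call (not sent here, since no server is running):
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(request.method() + " " + request.uri());
    }
}
```

A Camel producer component could hide this boilerplate behind an endpoint URI (e.g. something like `tensorflow-serving:mnist?host=localhost&port=8501`, syntax to be decided), letting the route body carry only the input tensor data.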