Details
- Improvement
- Status: Open
- Major
- Resolution: Unresolved
- 1.16
- None
Description
Background
Tika already has an Object Recognition Parser, a Video Labeling Parser, and an ongoing PR for an Image Captioning Parser. All of these parsers are backed by REST services, but there is currently no convenient way for users to deploy those services, even though common use cases call for several at once, such as using the Object Recognition Parser and the Image Captioning Parser together.

The implementation will be based on Docker. Instead of building a separate Docker container for each service (object recognition, video labeling, image captioning, and so on), the user will only need to build the one container that hosts the unified REST server and web GUI. That container will build the other containers and host them as REST services whenever the user needs them. For example, the first time a user needs the Object Recognition Parser, they only have to run the unified container and activate the object recognition service from the web GUI; activating the service for the first time automatically builds the object recognition container and makes the service available.
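The on-demand launching described above can be sketched as a small registry that maps each parser service to a Docker image and port, from which the unified server derives the `docker run` invocation the first time a user activates a service. The image names, ports, and registry layout here are hypothetical illustrations, not the actual implementation:

```python
# Hypothetical registry: parser service name -> (Docker image, container port).
# Image names and ports are assumed examples for illustration only.
SERVICES = {
    "object-recognition": ("tika/inception-rest", 8764),
    "image-captioning": ("tika/im2txt-rest", 8765),
    "video-labeling": ("tika/video-label-rest", 8766),
}

def docker_run_command(service):
    """Build the `docker run` command the unified server would issue
    the first time a user activates a service from the web GUI."""
    image, port = SERVICES[service]
    return [
        "docker", "run", "-d",          # run detached in the background
        "--name", service,              # stable name so it can be reused later
        "-p", "{0}:{0}".format(port),   # expose the service's REST port
        image,
    ]
```

Subsequent activations would find the named container already present and simply start it instead of building it again.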
Objectives
- Automating the Docker container build process for the user
- Creating a convenient platform where the user can start/terminate REST services and view model-usage statistics through a web GUI
- Making the current REST services more stable by placing them behind a reverse proxy server
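The reverse-proxy objective above could look roughly like the following nginx fragment, which gives clients one stable front-end URL even when a backend container is rebuilt or restarted. The upstream name, port, and path are assumed examples, not the actual configuration:

```nginx
# Hypothetical reverse-proxy fragment: the port and location path
# are illustrative assumptions.
upstream object_recognition {
    server localhost:8764;   # backend container's REST port (example)
}

server {
    listen 80;

    location /inception/ {
        # Clients always talk to the proxy; the backend container can
        # be restarted or replaced without changing this URL.
        proxy_pass http://object_recognition/;
    }
}
```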
Benefits
- Convenience
- Easy to implement a new parser with deep learning capabilities from any already-trained NN model, regardless of framework: we are not tied to Keras, TensorFlow, or DL4J. As long as the model can be hosted as a web app, this platform can support it.