First of all, kudos for PhotoPrism! I think it is a great idea to factor out the ML part into a microservice, but I'm really wondering whether it is a good idea to develop an ML server from scratch. In particular, the current code hard-codes its models, which does not scale well. There are plenty of mature ML serving frameworks out there; TF has its own TensorFlow Serving, but that comes with its own problem: it ties you to a specific backend, namely TF. https://github.com/roboflow/inference comes to mind as a backend-agnostic option, and here is a quite good list of possible alternatives: https://github.com/topics/inference-server
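To illustrate the hard-coding concern, a server could load its models from a configuration file instead of baking them into the code. This is a minimal sketch; the config schema, model names, and paths below are invented for illustration and are not PhotoPrism's actual API:

```python
# Hypothetical sketch of a config-driven model registry.
# All names/paths here are illustrative, not real PhotoPrism code.
import json

def load_model_registry(config_text: str) -> dict:
    """Parse a JSON config mapping model names to a backend and a path."""
    registry = {}
    for entry in json.loads(config_text)["models"]:
        registry[entry["name"]] = {
            "backend": entry["backend"],  # e.g. "tensorflow", "onnx"
            "path": entry["path"],        # where to load weights from
        }
    return registry

config = """
{
  "models": [
    {"name": "labels", "backend": "tensorflow", "path": "/models/nasnet"},
    {"name": "faces",  "backend": "onnx",       "path": "/models/facenet.onnx"}
  ]
}
"""

registry = load_model_registry(config)
print(registry["labels"]["backend"])  # tensorflow
```

Adding a new model (or swapping backends) then only requires editing the config, which is essentially what backend-agnostic servers like Roboflow Inference do at a larger scale.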