Hi @krittidet
What does it really need? Why not just stand up a second instance of the Nextcloud container on a node with a GPU, point it at the same database, and keep file locking in a shared Redis? Nothing external ever has to call it; you can just use the Recognize CLI commands.
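The second-instance approach described above could be sketched with Docker (a minimal sketch, not a tested deployment: the hostnames, credentials, volume name, and the `occ` subcommand are placeholders/assumptions to illustrate the idea; on Kubernetes the equivalent would be a Deployment scheduled onto the GPU node):

```shell
# Hypothetical sketch: a second Nextcloud container on the GPU machine,
# sharing the primary instance's database, Redis (for file locking),
# and installation volume. All names and credentials are placeholders.
docker run -d --name nextcloud-gpu \
  --gpus all \
  -e MYSQL_HOST=db.internal \
  -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=nextcloud \
  -e MYSQL_PASSWORD=changeme \
  -e REDIS_HOST=redis.internal \
  -v nextcloud:/var/www/html \
  nextcloud:latest

# Then run Recognize's jobs on this instance via occ (check the
# Recognize documentation for the exact subcommands on your version):
docker exec -u www-data nextcloud-gpu php occ recognize:classify
```

Because both containers share the same database, Redis, and files, the jobs run on the GPU node write their results into the same Nextcloud installation the main (non-GPU) instance serves.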
I’m running Nextcloud on a Kubernetes cluster, but this cluster doesn’t have any GPU resources.
For better performance, I’d like to run the Recognize app (or its backend service) on another machine that has a GPU, and let Nextcloud send recognition requests to that remote server.
Is it possible to configure Recognize this way, as a standalone service that Nextcloud can call externally? If so, could you please provide some guidance or configuration examples?