Description:
I would like to serve the gpt-oss-20B model using Triton Inference Server on a setup with 4× Tesla T4 GPUs.
Questions:
Is it feasible to run gpt-oss-20B on this hardware configuration? (See my rough memory math after this list.)
If yes, what are the recommended setup and configuration steps to achieve this?
If not, what alternative approaches or best practices would you recommend for serving this model on Tesla T4 GPUs?
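
For context, here is my back-of-the-envelope VRAM estimate. This is only a sketch under my own assumptions (a ~21B parameter count inferred from the model name, an ideal even weight split across GPUs, and candidate precisions I picked myself); the real footprint with activations, KV cache, and Triton/backend overhead will be higher:

```python
# Rough VRAM estimate for serving gpt-oss-20B on 4x Tesla T4 (16 GB each).
# All numbers below are my assumptions, not measured values.

GIB = 1024 ** 3

num_params = 21e9     # assumed total parameter count for a "20B" model
t4_vram_gib = 16      # advertised VRAM per Tesla T4
num_gpus = 4

# Weight footprint at a few candidate precisions (bytes per parameter),
# assuming an ideal even split across GPUs under tensor parallelism.
for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    weights_gib = num_params * bytes_per_param / GIB
    per_gpu_gib = weights_gib / num_gpus
    print(f"{label:>6}: weights ~{weights_gib:5.1f} GiB total, "
          f"~{per_gpu_gib:4.1f} GiB/GPU (of {t4_vram_gib} GiB)")
```

If this math is roughly right, the weights alone (~39 GiB at fp16, ~10 GiB per GPU) might fit across the four T4s, but I am unsure how much headroom the KV cache and runtime overhead actually require in practice, hence the questions above.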