Add a benchmark for the unified attention Triton kernel from the vLLM project: https://github.com/vllm-project/vllm/blob/main/vllm/attention/ops/triton_unified_attention.py
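A minimal sketch of what such a benchmark harness could look like. The real kernel needs a GPU plus torch/triton, so a naive pure-Python scaled-dot-product attention stands in as the callable under test; the idea is that the timed function would be swapped for a call into vLLM's `triton_unified_attention` (whose exact signature is not assumed here).

```python
# Sketch of a benchmark harness for an attention implementation.
# naive_attention is a stand-in reference; in the actual benchmark the
# timed callable would invoke vLLM's Triton unified attention kernel.
import math
import time
from typing import Callable, List

Matrix = List[List[float]]

def naive_attention(q: Matrix, k: Matrix, v: Matrix) -> Matrix:
    """Reference scaled-dot-product attention (single head, no batching)."""
    d = len(q[0])
    scale = 1.0 / math.sqrt(d)
    out: Matrix = []
    for qi in q:
        # scores[j] = (qi . kj) * scale, then a numerically stable softmax
        scores = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * vj[c] for w, vj in zip(weights, v))
                    for c in range(len(v[0]))])
    return out

def bench(fn: Callable[..., Matrix], *args, iters: int = 10) -> float:
    """Return mean wall-clock seconds per call over `iters` runs."""
    fn(*args)  # warmup run, excluded from timing
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - t0) / iters

if __name__ == "__main__":
    n, d = 64, 32  # sequence length and head dimension for the toy run
    q = [[math.sin(i * d + j) for j in range(d)] for i in range(n)]
    k = [[math.cos(i * d + j) for j in range(d)] for i in range(n)]
    v = [[float((i + j) % 7) for j in range(d)] for i in range(n)]
    secs = bench(naive_attention, q, k, v)
    print(f"naive attention: {secs * 1e3:.3f} ms/iter for {n}x{d}")
```

For GPU kernels the timing loop would additionally need device synchronization around the timed region (or a utility such as `triton.testing.do_bench`), since kernel launches are asynchronous.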