
Benchmark Details #27

@agnesnatasya

Description


Hi,

I am interested in replicating the benchmark setup detailed in the Assise paper, and I would like to ask about a few details of the NFS and CephFS configurations.

In the experimental configuration section, it is stated that:

  • Ceph
    • "machines are used as OSD and MDS replicas in Ceph"
    • "Ceph's cluster managers run on 2 additional testbed machines"
  • NFS
    • "NFS uses only one machine as server"

For Ceph,

  1. If the number of machines is 2, as used in the write-latency microbenchmark:
    a. How many data-pool replicas, metadata-pool replicas, and MDS replicas are used, respectively?
  2. What does the "Ceph cluster manager" mentioned in the paper refer to? Is it the MDS replicas?
    a. If yes, does that mean the number of MDS replicas is 2, since they run on 2 additional testbed machines?
  3. Since the kernel buffer cache is limited to 3 GB for all file systems, does this apply only to the kernel page cache size? Is there any specification for the MDS cache? (The sketch after this list shows how I currently cap it.)
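
For context on question 3, this is roughly how I cap the MDS cache myself while experimenting. It is only a sketch: the pool names (cephfs_data, cephfs_metadata) and the 3 GiB value are my own assumptions, not taken from the paper.

    # Sketch: cap the Ceph MDS in-memory cache and check pool replication.
    # Assumes the ceph CLI is installed and an admin keyring is available.
    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and return its stdout.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    # mds_cache_memory_limit is specified in bytes; cap it at 3 GiB.
    ceph("config", "set", "mds", "mds_cache_memory_limit", str(3 * 1024**3))

    # Replication factor ("size") of the data and metadata pools.
    print(ceph("osd", "pool", "get", "cephfs_data", "size"))
    print(ceph("osd", "pool", "get", "cephfs_metadata", "size"))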

For NFS,

  1. If the number of machines is 2, does that mean there are 1 client and 1 server, or 2 clients and 1 server, in the cluster?

For both,

  1. How do you set the Linux page cache size to 3 GB and make sure that the other clients' kernel buffer caches are completely empty before reading? I tried several NFS mount options, but the other client still seems to prefetch data written on other clients: the benchmark values I get for a Read-HIT and a Read-MISS are identical. A rough sketch of what I currently run on the reading client is below.
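
For reference, this is roughly what I run on the reading client before each measurement. It assumes root and cgroup v1 mounted at /sys/fs/cgroup/memory; the group name "fsbench", the 3 GiB limit, and the /mnt/nfs/testfile path are my own placeholders, not from the paper.

    # Sketch: cap the client's page cache via a memory cgroup and drop all
    # cached pages before the read, so that a Read-MISS really misses.
    import os
    import subprocess

    CGROUP = "/sys/fs/cgroup/memory/fsbench"
    LIMIT = 3 * 1024**3  # 3 GiB cap (page cache + anon memory of the cgroup)

    def setup_cgroup():
        os.makedirs(CGROUP, exist_ok=True)
        with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
            f.write(str(LIMIT))

    def drop_caches():
        # Flush dirty pages first, then drop page cache, dentries, and inodes.
        subprocess.run(["sync"], check=True)
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3")

    def run_in_cgroup(cmd):
        # Move this process into the cgroup; the benchmark child inherits it.
        with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
            f.write(str(os.getpid()))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        setup_cgroup()
        drop_caches()
        run_in_cgroup(["cat", "/mnt/nfs/testfile"])  # placeholder read workload

I picked a memory cgroup because, as far as I know, there is no global knob that limits only the page cache size; if the paper used a different mechanism, I would be glad to know.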

Thank you very much for your kind help!
