Improve signature time estimation on cold start #360

Open
@jpsamaroo

Description

Currently we don't do much to estimate the approximate time cost of a signature on "cold start" (i.e. when the signature has never been seen before). Our current approach assumes the signature has a cost of 1 microsecond; even if that's accurate on average, it means that any signature that actually takes longer will be over-subscribed until its true cost is measured.
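For reference, the current scheme boils down to a fixed default that gets replaced by measured timings. A minimal sketch (in Python, with hypothetical names; the real scheduler's bookkeeping differs):

```python
DEFAULT_COST_NS = 1_000  # the assumed 1 microsecond cold-start cost

class CostModel:
    """Toy cost model: fixed default until a signature has been observed."""

    def __init__(self):
        self.observed = {}  # signature -> list of measured runtimes (ns)

    def estimate(self, sig):
        times = self.observed.get(sig)
        if not times:
            return DEFAULT_COST_NS  # cold start: no data yet
        return sum(times) / len(times)  # average of observed runtimes

    def record(self, sig, elapsed_ns):
        self.observed.setdefault(sig, []).append(elapsed_ns)
```

Any signature whose true cost exceeds `DEFAULT_COST_NS` is underestimated (and thus over-subscribed) until `record` has been called at least once for it.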

Here are some ideas for what we can do:

  • Execute at most one at a time per processor, stalling other executions of the same signature until an estimate is available
  • Adjust the cold start estimate based on an average of other signature estimates (potentially weighted by "closeness" to the target signature; i.e. when guessing the cost of f(::Array), f(::Int) and g(::Array) should be weighted higher than g(::Float32))
  • Use reflection information (such as total instruction count, loop nesting depth, inlining costs, etc.) for cold start estimates. This might also provide useful infrastructure for determining other behavior of the signature, such as allocation estimates, presence of yield points, etc.
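The first idea can be sketched as a per-signature gate: the first arrival runs and measures the signature, while concurrent duplicates wait until an estimate exists. This is a hedged illustration in Python (`ColdStartGate`, `run`, and `thunk` are hypothetical names, not Dagger API):

```python
import threading
import time

class ColdStartGate:
    """First execution of an unseen signature runs alone and is timed;
    duplicates stall until the measured estimate is available."""

    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}     # signature -> Event set once an estimate exists
        self._estimates = {}  # signature -> measured runtime (ns)

    def run(self, sig, thunk):
        with self._lock:
            ev = self._events.get(sig)
            first = ev is None
            if first:
                # First arrival: this caller will measure the signature.
                ev = threading.Event()
                self._events[sig] = ev
        if first:
            start = time.perf_counter_ns()
            result = thunk()
            self._estimates[sig] = time.perf_counter_ns() - start
            ev.set()  # release any stalled duplicates
            return result
        ev.wait()  # stall until an estimate exists, then run normally
        return thunk()
```

The trade-off is serializing the very first wave of a new signature; after that, executions proceed concurrently with a real estimate in hand.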
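The closeness-weighted idea might look like the following sketch, where similarity is a crude score over the function name and argument types (signatures are modeled as tuples of strings; the scoring function is an assumption, not a proposal for the exact metric):

```python
def closeness(sig_a, sig_b):
    """Crude similarity: 1 point for the same function, 1 per matching arg type."""
    fa, *args_a = sig_a
    fb, *args_b = sig_b
    score = 1.0 if fa == fb else 0.0
    score += sum(1.0 for a, b in zip(args_a, args_b) if a == b)
    return score

def cold_start_estimate(target, known_costs, default_ns=1_000):
    """Closeness-weighted average of known signature costs;
    falls back to the fixed default when nothing is close."""
    weights = {sig: closeness(target, sig) for sig in known_costs}
    total = sum(weights.values())
    if total == 0:
        return default_ns
    return sum(w * known_costs[sig] for sig, w in weights.items()) / total
```

With this scoring, estimating f(::Array) from known costs of f(::Int) and g(::Array) gives both a nonzero weight, while g(::Float32) contributes nothing, matching the weighting described above.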
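For the reflection idea, a toy stand-in (using Python bytecode as a proxy for Julia's lowered IR, purely for illustration) would count instructions and scale by an assumed per-instruction cost:

```python
import dis

def reflective_estimate(func, ns_per_instruction=10):
    # Crude static proxy: count instructions and scale by an assumed
    # per-instruction cost. A real version could additionally weight loop
    # bodies by nesting depth, consult inlining costs, and flag allocations
    # or yield points discovered during the same walk.
    return ns_per_instruction * sum(1 for _ in dis.get_instructions(func))
```

The appeal is that the same IR walk that produces the cost estimate could also surface the other static properties mentioned above (allocations, yields) at no extra cost.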
