
Accept shape tensors in Compiler #64

Closed (wants to merge 1 commit)

Conversation

jhalakpatel (Collaborator):

No description provided.

@jhalakpatel added the labels "tripy (Pull request for the tripy project)" and "enhancement (New feature or request)" on Aug 8, 2024
@jhalakpatel changed the title to "#65: Accept shape tensors in Compiler" on Aug 8, 2024
@jhalakpatel changed the title back to "Accept shape tensors in Compiler" on Aug 8, 2024
@jhalakpatel linked an issue on Aug 8, 2024 that may be closed by this pull request
# Both dim dynamic
([(1, 2, 3), (4, 5, 6)], (1, 4), (2, 5), (3, 6)),
# min/opt/max specified as shape tensor
([tp.Shape([1, 2, 3])], (1,), (2,), (3,)),
jhalakpatel (Collaborator, Author):

@pranavm-nvidia is this how you expected the shape profile to be provided using a shape tensor?

],
)
elif isinstance(elem, Shape):
elem = elem.data().data()
jhalakpatel (Collaborator, Author) on Aug 8, 2024:

This is suboptimal, I think. We should be able to slice shape tensors and populate the shape bounds directly.

Collaborator:

Why is elem a shape tensor? What you want is for the shape itself to be a shape tensor describing the shape of the input:

a = tp.ones((2,3))
shape_of_a = a.shape
a_info = tp.InputInfo(shape_of_a, dtype=tp.float32) 

Your current approach is doing:

shape_of_a = a.shape
a_info = tp.InputInfo([shape_of_a[0], shape_of_a[1]], dtype=tp.float32) 

jhalakpatel (Collaborator, Author):

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1962 passed, 47 skipped, 1 deselected, 1 warning in 336.70s (0:05:36)

@jhalakpatel jhalakpatel force-pushed the compiler-shape-tensors branch 2 times, most recently from 23bf6d6 to f7bc564 Compare August 14, 2024 05:47
@jhalakpatel jhalakpatel force-pushed the compiler-shape-tensors branch from f7bc564 to 44cbdd6 Compare August 14, 2024 05:50
@jhalakpatel jhalakpatel marked this pull request as ready for review August 14, 2024 05:52
)
if len(elem) != 3:
if isinstance(shape, Shape):
assert shape.shape.rank == 1
Collaborator:

Why is this shape.shape.rank and not shape.rank?

jhalakpatel (Collaborator, Author):

Thanks for catching this. I think shape.rank == 1 by construction; what I wanted to check here is len(shape.shape) == 1. #92 should help with this check once merged.

([(1, 2, 3), 4], (1, 4), (2, 4), (3, 4)),
# Both dim dynamic
([(1, 2, 3), (4, 5, 6)], (1, 4), (2, 5), (3, 6)),
# static shape via shape tensor
jhalakpatel (Collaborator, Author):

I am not sure this is useful, since we cannot encode a dynamic dim here unless we allow something like tp.Shape([(1, 2, 3), 4]), where (1, 2, 3) is the min/opt/max.

jhalakpatel (Collaborator, Author):

@parthchadha If it does not make sense, I can close the PR.

@pranavm-nvidia mentioned that he filed the issue to add support for shape tensor inputs with value bounds, but added the TODO (#252) in InputInfo incorrectly.

@@ -95,7 +101,7 @@ def test_invalid_shape(self, shape, expected_error):
 @pytest.fixture(scope="session")
 def single_return_executable():
     compiler = tp.Compiler(add)
-    return compiler.compile(tp.InputInfo((2, 2), dtype=tp.float32), tp.InputInfo((2, 2), dtype=tp.float32))
+    return compiler.compile(tp.InputInfo(tp.Shape([2, 2]), dtype=tp.float32), tp.InputInfo((2, 2), dtype=tp.float32))
Collaborator:

The idea was to support shape tensor inputs at runtime. I don't think we need to change anything about the compiler. InputInfo can be used to express the range of values the shape tensor can take.
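To make the bounds semantics concrete: the parametrized test cases in this PR pair a shape spec, whose elements are either a fixed int or a (min, opt, max) tuple, with the three resulting shape bounds. The following standalone sketch shows that expansion; expand_shape_bounds is a hypothetical helper written for illustration, not part of the tripy API:

```python
# Hypothetical sketch (not the tripy implementation): expand a shape spec
# whose elements are either a fixed int or a (min, opt, max) tuple into
# the three shape bounds used for an optimization profile.
def expand_shape_bounds(shape_spec):
    min_shape, opt_shape, max_shape = [], [], []
    for elem in shape_spec:
        if isinstance(elem, int):
            # Static dimension: identical in all three bounds.
            min_shape.append(elem)
            opt_shape.append(elem)
            max_shape.append(elem)
        elif isinstance(elem, tuple) and len(elem) == 3:
            # Dynamic dimension given as (min, opt, max).
            lo, opt, hi = elem
            min_shape.append(lo)
            opt_shape.append(opt)
            max_shape.append(hi)
        else:
            raise ValueError(f"Expected an int or a (min, opt, max) tuple, got: {elem!r}")
    return tuple(min_shape), tuple(opt_shape), tuple(max_shape)

# Matches the parametrized test cases above, e.g.:
# expand_shape_bounds([(1, 2, 3), 4])         -> ((1, 4), (2, 4), (3, 4))
# expand_shape_bounds([(1, 2, 3), (4, 5, 6)]) -> ((1, 4), (2, 5), (3, 6))
```

A shape tensor input, by contrast, would carry these dimension values at runtime; the point of the comment above is that the same value ranges are already expressible through the spec.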

[f"Shape: {shape} contains an element: {repr(elem)} with non-numerical value(s)"],
)
if len(elem) != 3:
if isinstance(shape, Shape):
Collaborator:

I don't think there's any value in allowing shape tensors to compile. InputInfo already expresses everything we want AFAICT.

jhalakpatel (Collaborator, Author):

Closing this PR based on the above discussion.

@jhalakpatel jhalakpatel deleted the compiler-shape-tensors branch August 15, 2024 23:40
Labels: tripy (Pull request for the tripy project), enhancement (New feature or request)
Linked issue that merging may close: Accept shape tensors in Compiler and propagate bounds
3 participants