This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit ae10ffb
docs fixes for mobile docs
1 parent 8d3d5fa

File tree: 4 files changed, +29 -8 lines

docs/source/_static/css/tc_theme.css (+21)

@@ -90,3 +90,24 @@ footer .rst-footer-buttons {
 footer p {
   font-size: 100%;
 }
+
+/* Fixes for mobile - adopted from pytorch theme*/
+.wy-nav-top {
+  background-color: #fff;
+  background-image: url('../img/tc-logo-full-color-with-text-2.png');
+  background-repeat: no-repeat;
+  background-position: center;
+  padding: 0;
+  margin: 0.4045em 0.809em;
+  color: #333;
+}
+
+.wy-nav-top > a {
+  display: none;
+}
+
+@media screen and (max-width: 768px) {
+  .wy-side-nav-search>a img.logo {
+    height: 60px;
+  }
+}

docs/source/conf.py (+2, -2)

@@ -71,9 +71,9 @@
 # built documents.
 #
 # The short X.Y version.
-version = 'v0.1.0'
+version = 'v0.1.1'
 # The full version, including alpha/beta/rc tags.
-release = 'v0.1.0'
+release = 'v0.1.1'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
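The hunk above bumps Sphinx's ``version`` (the short X.Y string) and ``release`` (the full tag) by hand, in lockstep. A common way to avoid such two-line bumps is to set only ``release`` and derive ``version`` from it; a minimal sketch (the helper name is an assumption, not part of this repo):

```python
# Hypothetical helper for a Sphinx conf.py: derive the short "version"
# string from the full "release" tag so only one value is bumped per release.
# (The conf.py in this commit sets both strings manually instead.)
def short_version(release: str) -> str:
    """Return the 'X.Y' prefix of a release tag such as 'v0.1.1'."""
    parts = release.lstrip("v").split(".")
    return ".".join(parts[:2])

release = "v0.1.1"
version = short_version(release)  # "0.1"
```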

docs/source/framework/pytorch_integration/layers_database.rst (+1, -1)

@@ -256,7 +256,7 @@ Cast
 .. code::
 
   def cast(float(M,N) A) -> (int32(M,N) O1) {{
-      O1(m, n) = int32(A(m, n) + {four})
+      O1(m, n) = int32(A(m, n) + {constant})
  }}
 
 Copy
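The doubled braces (``{{``/``}}``) in the ``cast`` snippet above suggest the TC definition is stored as a Python format-string template: the doubled braces render as literal braces, while ``{constant}`` (renamed from ``{four}`` in this commit) is substituted before the TC is compiled. A minimal sketch of that substitution, assuming the template is used this way:

```python
# Sketch: the TC definition as a Python format template. '{{' and '}}'
# become literal '{' and '}', and '{constant}' is filled in by .format().
cast_template = """
def cast(float(M,N) A) -> (int32(M,N) O1) {{
    O1(m, n) = int32(A(m, n) + {constant})
}}
"""

# Substitute a concrete value for the placeholder before compiling the TC.
tc_source = cast_template.format(constant=4)
```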

docs/source/tutorials/tutorial_tensordot_with_tc.rst (+5, -5)

@@ -10,7 +10,7 @@ For this tutorial, you will need to install Tensor Comprehensions binary. You ca
 get binary builds of Tensor Comprehensions with: ``conda install -y -c pytorch -c prigoyal tensor_comprehensions``
 
 About TensorDot
-^^^^^^^^^^^^^^^
+---------------
 
 Assume that we have two tensors, one with dimension :code:`(N, C1, C2, H, W)` and
 one with dimension :code:`(N, C2, C3, H, W)`, and we want to do a gemm-type
@@ -36,7 +36,7 @@ to TensorDot operation.
 
 A simple 2D matrix multiply operation in TC is expressed as:
 
-.. code-block:: python
+.. code::
 
   def matmul(float(M, N) X, float(N, K) W) -> (output) {
     output(m, k) +=! X(m, nn) * W(nn, k)
@@ -47,7 +47,7 @@ The variable :code:`nn` is being reduced in above expression. Now, let's write a
 **batched matrix-multiply** operation using above expression. For that, we need to
 add a batch dimension to it and the expression becomes:
 
-.. code-block:: python
+.. code::
 
   def batch_matmul(float(B, M, N) X, float(B, N, K) W) -> (output) {
     output(b, m, k) +=! X(b, m, nn) * W(b, nn, k)
@@ -56,7 +56,7 @@ add a batch dimension to it and the expression becomes:
 Now, for the tensordot operation, we need to add spatial dimensions :code:`H` and :code:`W`
 to the batched matrix multiply, and the expression for TensorDot becomes:
 
-.. code-block:: python
+.. code::
 
   def tensordot(float(B, C1, C2, H, W) I0, float(B, C2, C3, H, W) I1) -> (O) {
     O(b, c1, c3, h, w) +=! I0(b, c1, c2, h, w) * I1(b, c2, c3, h, w)
@@ -140,7 +140,7 @@ get a decent kernel performance as shown in the screenshot below (tuned on one M
 :align: center
 
 Early stopping
-^^^^^^^^^^^^^^
+--------------
 
 If your kernel performance is good enough while the autotuning continues, you
 can stop autotuning by pressing :code:`Ctrl+C` and the autotuning cache will be saved
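The ``tensordot`` TC in the diff above reduces over :code:`c2` for every output index :code:`(b, c1, c3, h, w)`. A minimal pure-Python sketch of that contraction semantics, on tiny nested-list tensors (the TC itself is compiled to a GPU kernel; this only illustrates the math):

```python
# Reference semantics of the tensordot TC:
#   O(b, c1, c3, h, w) +=! I0(b, c1, c2, h, w) * I1(b, c2, c3, h, w)
# i.e. a batched matmul over (c1, c2) x (c2, c3) at every spatial (h, w).
def tensordot_ref(I0, I1, B, C1, C2, C3, H, W):
    """Sum over the contracted dimension c2 for each (b, c1, c3, h, w)."""
    O = [[[[[0.0 for _ in range(W)] for _ in range(H)]
           for _ in range(C3)] for _ in range(C1)] for _ in range(B)]
    for b in range(B):
        for c1 in range(C1):
            for c3 in range(C3):
                for h in range(H):
                    for w in range(W):
                        O[b][c1][c3][h][w] = sum(
                            I0[b][c1][c2][h][w] * I1[b][c2][c3][h][w]
                            for c2 in range(C2))
    return O
```

With all-ones inputs, every output entry equals :code:`C2`, which makes the reduction easy to sanity-check.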
