replace 'master' branch ref to 'main' for onnx repo (microsoft#12678)
fs-eire authored Aug 30, 2022
1 parent 9aefcc2 commit 1a402a3
Showing 54 changed files with 271 additions and 272 deletions.
2 changes: 1 addition & 1 deletion docs/Reduced_Operator_Kernel_build.md
@@ -60,7 +60,7 @@ The opset can match either the opset import for each model, or the initial ONNX
 e.g. if a model imports opset 12 of ONNX, all ONNX operators in that model can be listed under opset 12 for the 'ai.onnx' domain.

 [Netron](https://netron.app/) can be used to view an ONNX model properties to discover the opset imports.
-Additionally, the ONNX operator specs for [DNN](https://github.com/onnx/onnx/blob/master/docs/Operators.md) and [traditional ML](https://github.com/onnx/onnx/blob/master/docs/Operators-ml.md) operators list the individual operator versions.
+Additionally, the ONNX operator specs for [DNN](https://github.com/onnx/onnx/blob/main/docs/Operators.md) and [traditional ML](https://github.com/onnx/onnx/blob/main/docs/Operators-ml.md) operators list the individual operator versions.

 ### Type reduction format
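As a quick aside to the Netron tip in the hunk above, a model's opset imports and operator list can also be read programmatically. A minimal sketch, assuming only a local `model.onnx` (the path is illustrative; nothing here is specific to this commit):

```python
# Sketch: list a model's opset imports and the operators it uses.
import onnx

model = onnx.load("model.onnx")  # illustrative path
# Each opset_import pairs a domain (empty string means 'ai.onnx') with a version.
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", imp.version)
# Distinct operator types appearing in the graph.
print(sorted({node.op_type for node in model.graph.node}))
```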
3 changes: 1 addition & 2 deletions docs/Versioning.md
@@ -22,7 +22,7 @@ models that are stamped with ONNX opset versions in the range [7-9].
 ### Version matrix
 The [table](https://onnxruntime.ai/docs/reference/compatibility.html#onnx-opset-support) summarizes the relationship between the ONNX Runtime version and the ONNX opset version implemented in that release.
 Please note the backward compatibility notes above.
-For more details on ONNX Release versions, see [this page](https://github.com/onnx/onnx/blob/master/docs/Versioning.md).
+For more details on ONNX Release versions, see [this page](https://github.com/onnx/onnx/blob/main/docs/Versioning.md).

 ## Tool Compatibility
 A variety of tools can be used to create ONNX models. Unless otherwise noted, please use the latest released version of the tools to convert/export the ONNX model. Most tools are backwards compatible and support multiple ONNX versions. Join this with the table above to evaluate ONNX Runtime compatibility.
@@ -40,4 +40,3 @@ A variety of tools can be used to create ONNX models. Unless otherwise noted, pl
 |[Paddle2ONNX](https://pypi.org/project/paddle2onnx/)| [Latest stable](https://github.com/PaddlePaddle/Paddle2ONNX/releases) | 1.6-1.9 |
 |[AutoML](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-automated-ml)|[1.0.39+](https://pypi.org/project/azureml-automl-core)|1.5|
 | |[1.0.33](https://pypi.org/project/azureml-automl-core/1.0.33/)|1.4|
-
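For readers checking their own setup against the version matrix above, a minimal sketch (assumes the `onnx` and `onnxruntime` PyPI packages are installed):

```python
# Sketch: print the installed ONNX Runtime version and the highest
# ONNX opset known to the installed onnx package.
import onnx
import onnxruntime

print("onnxruntime:", onnxruntime.__version__)
print("max onnx opset:", onnx.defs.onnx_opset_version())
```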
2 changes: 1 addition & 1 deletion docs/onnxruntime_extensions.md
@@ -79,7 +79,7 @@ e2e_graph = helper.make_graph(
 )
 # ...
 ```
-For more usage of ONNX helper, please visit the document [Python API Overview](https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md).
+For more usage of ONNX helper, please visit the document [Python API Overview](https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md).

 ### Run E2E Model in Python
 ```python
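For context, the `helper` module referenced in this diff is `onnx.helper`. A self-contained sketch of the workflow the linked Python API Overview describes (the one-node Identity model and all names are illustrative):

```python
# Sketch: build and validate a one-node ONNX model with onnx.helper.
import onnx
from onnx import TensorProto, helper

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])
node = helper.make_node("Identity", inputs=["x"], outputs=["y"])
graph = helper.make_graph([node], "identity_graph", [x], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 12)])
onnx.checker.check_model(model)  # raises if the model is malformed
```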
14 changes: 7 additions & 7 deletions docs/python/inference/api_summary.rst
@@ -17,7 +17,7 @@ Load and run a model
 --------------------

 InferenceSession is the main class of ONNX Runtime. It is used to load and run an ONNX model,
-as well as specify environment and application configuration options. 
+as well as specify environment and application configuration options.

 .. code-block:: python
@@ -65,7 +65,7 @@ Data on CPU
 ^^^^^^^^^^^

 On CPU (the default), OrtValues can be mapped to and from native Python data structures: numpy arrays, dictionaries and lists of
-numpy arrays. 
+numpy arrays.

 .. code-block:: python
@@ -95,15 +95,15 @@ this called `IOBinding`.

 To use the `IOBinding` feature, replace `InferenceSession.run()` with `InferenceSession.run_with_iobinding()`.

-A graph is executed on a device other than CPU, for instance CUDA. Users can 
+A graph is executed on a device other than CPU, for instance CUDA. Users can
 use IOBinding to copy the data onto the GPU.

 .. code-block:: python

-	# X is numpy array on cpu 
+	# X is numpy array on cpu
 	session = onnxruntime.InferenceSession('model.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 	io_binding = session.io_binding()
-	# OnnxRuntime will copy the data over to the CUDA device if 'input' is consumed by nodes on the CUDA device 
+	# OnnxRuntime will copy the data over to the CUDA device if 'input' is consumed by nodes on the CUDA device
 	io_binding.bind_cpu_input('input', X)
 	io_binding.bind_output('output')
 	session.run_with_iobinding(io_binding)
@@ -122,7 +122,7 @@ The input data is on a device, users directly use the input. The output data is
 	session.run_with_iobinding(io_binding)
 	Y = io_binding.copy_outputs_to_cpu()[0]

-The input data and output data are both on a device, users directly use the input and also place output on the device. 
+The input data and output data are both on a device, users directly use the input and also place output on the device.

 .. code-block:: python
@@ -284,7 +284,7 @@ Backend

 In addition to the regular API which is optimized for performance and usability,
 *ONNX Runtime* also implements the
-`ONNX backend API <https://github.com/onnx/onnx/blob/master/docs/ImplementingAnOnnxBackend.md>`_
+`ONNX backend API <https://github.com/onnx/onnx/blob/main/docs/ImplementingAnOnnxBackend.md>`_
 for verification of *ONNX* specification conformance.
 The following functions are supported:
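The code block for the both-on-device case is collapsed in this diff. As a hedged sketch of what it typically looks like with `OrtValue` (the model path, tensor names, and shapes are illustrative; assumes a CUDA build of onnxruntime):

```python
# Sketch: bind input and output that both live on a CUDA device.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession(
    "model.onnx", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
X = np.zeros((1, 4), dtype=np.float32)  # illustrative input

# Copy the input to the device once; allocate the output there as well.
x_ort = onnxruntime.OrtValue.ortvalue_from_numpy(X, "cuda", 0)
y_ort = onnxruntime.OrtValue.ortvalue_from_shape_and_type(
    (1, 4), np.float32, "cuda", 0)

io_binding = session.io_binding()
io_binding.bind_ortvalue_input("input", x_ort)
io_binding.bind_ortvalue_output("output", y_ort)
session.run_with_iobinding(io_binding)
Y = y_ort.numpy()  # copies back to CPU only when requested
```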
4 changes: 2 additions & 2 deletions docs/python/inference/examples/plot_backend.py
@@ -8,8 +8,8 @@
 ONNX Runtime Backend for ONNX
 =============================
-*ONNX Runtime* extends the
-`onnx backend API <https://github.com/onnx/onnx/blob/master/docs/ImplementingAnOnnxBackend.md>`_
+*ONNX Runtime* extends the
+`onnx backend API <https://github.com/onnx/onnx/blob/main/docs/ImplementingAnOnnxBackend.md>`_
 to run predictions using this runtime.
 Let's use the API to compute the prediction
 of a simple logistic regression model.
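For reference, the backend API this example exercises is used roughly as follows. A sketch under the assumption of a CPU build and a local model file (the file name is illustrative):

```python
# Sketch: run a prediction through onnxruntime's ONNX backend API.
import numpy as np
import onnx
import onnxruntime.backend as backend

model = onnx.load("logreg_iris.onnx")  # illustrative file name
rep = backend.prepare(model, "CPU")
x = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)
print(rep.run(x))
```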
2 changes: 1 addition & 1 deletion docs/python/inference/examples/plot_load_and_predict.py
@@ -19,7 +19,7 @@

 #########################
 # Let's load a very simple model.
-# The model is available on github `onnx...test_sigmoid <https://github.com/onnx/onnx/tree/master/onnx/backend/test/data/node/test_sigmoid>`_.
+# The model is available on github `onnx...test_sigmoid <https://github.com/onnx/onnx/blob/main/onnx/backend/test/data/node/test_sigmoid>`_.

 example1 = get_example("sigmoid.onnx")
 sess = rt.InferenceSession(example1, providers=rt.get_available_providers())
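The run call that follows in the (collapsed) example looks roughly like this. A sketch assuming the `sess` created in the context lines above:

```python
# Sketch: feed random data matching the declared input and run the session.
import numpy as np

input_meta = sess.get_inputs()[0]
x = np.random.random(input_meta.shape).astype(np.float32)
result = sess.run(None, {input_meta.name: x})  # None = return all outputs
print(result[0])
```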
4 changes: 2 additions & 2 deletions docs/python/inference/examples/plot_pipeline.py
@@ -6,7 +6,7 @@
 ===============
 There is no other way to look into one model stored
-in ONNX format than looking into its node with 
+in ONNX format than looking into its node with
 *onnx*. This example demonstrates
 how to draw a model and to retrieve it in *json*
 format.
@@ -34,7 +34,7 @@
 #################################
 # Draw a model with ONNX
 # ++++++++++++++++++++++
-# We use `net_drawer.py <https://github.com/onnx/onnx/blob/master/onnx/tools/net_drawer.py>`_
+# We use `net_drawer.py <https://github.com/onnx/onnx/blob/main/onnx/tools/net_drawer.py>`_
 # included in *onnx* package.
 # We use *onnx* to load the model
 # in a different way than before.
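For orientation, `net_drawer.py` is used roughly as below. A sketch, assuming `pydot` and Graphviz are installed and a local model file (names are illustrative):

```python
# Sketch: render an ONNX graph to Graphviz dot with onnx's net_drawer.
import onnx
from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph

model = onnx.load("model.onnx")  # illustrative path
pydot_graph = GetPydotGraph(
    model.graph, name=model.graph.name, rankdir="TB",
    node_producer=GetOpNodeProducer("docstring"))
pydot_graph.write_dot("graph.dot")  # then: dot -Tpng graph.dot -o graph.png
```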
36 changes: 18 additions & 18 deletions java/src/test/java/ai/onnxruntime/OnnxMl.java

Some generated files are not rendered by default, so this file's diff is not shown.

4 changes: 2 additions & 2 deletions js/web/README.md
@@ -46,11 +46,11 @@ Refer to the following links for development information:

 #### WebAssembly backend

-ONNX Runtime Web currently support all operators in [ai.onnx](https://github.com/onnx/onnx/blob/master/docs/Operators.md) and [ai.onnx.ml](https://github.com/onnx/onnx/blob/master/docs/Operators-ml.md).
+ONNX Runtime Web currently supports all operators in [ai.onnx](https://github.com/onnx/onnx/blob/main/docs/Operators.md) and [ai.onnx.ml](https://github.com/onnx/onnx/blob/main/docs/Operators-ml.md).

 #### WebGL backend

-ONNX Runtime Web currently supports a subset of operators in [ai.onnx](https://github.com/onnx/onnx/blob/master/docs/Operators.md) operator set. See [operators.md](./docs/operators.md) for a complete, detailed list of which ONNX operators are supported by WebGL backend.
+ONNX Runtime Web currently supports a subset of operators in [ai.onnx](https://github.com/onnx/onnx/blob/main/docs/Operators.md) operator set. See [operators.md](./docs/operators.md) for a complete, detailed list of which ONNX operators are supported by WebGL backend.

 ## License