Move intel_models to models/intel
This creates a single hierarchy of all model documentation. It's also
the first step towards the splitting of Model Downloader config files
into per-model configs.
Roman Donchenko committed Aug 1, 2019
1 parent 3bcadb3 commit f74648d
Showing 158 changed files with 14 additions and 14 deletions.
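The substance of the change is a mechanical link rewrite: every Markdown reference to `intel_models/` is repointed at `models/intel/`, preserving any `../` prefix. A rewrite of this kind can be sketched as below; the `rewrite_links` helper and the sample line are our illustration, not part of the commit — only the two directory names come from it.

```python
import re

# The old and new documentation roots, as given in the commit message.
OLD_DIR = "intel_models"
NEW_DIR = "models/intel"

def rewrite_links(markdown: str) -> str:
    # Match the old directory at the start of a Markdown link target,
    # keeping any relative "../" prefix, e.g.
    # (../../intel_models/index.md) -> (../../models/intel/index.md)
    pattern = re.compile(r"\(((?:\.\./)*)" + re.escape(OLD_DIR) + "/")
    return pattern.sub(lambda m: "(" + m.group(1) + NEW_DIR + "/", markdown)

print(rewrite_links("[Pre-Trained Models](intel_models/index.md)"))
# -> [Pre-Trained Models](models/intel/index.md)
```

Applied over every `README.md` in the tree, a helper like this would produce exactly the 14 one-line edits shown in the hunks below.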
2 changes: 1 addition & 1 deletion README.md
@@ -7,7 +7,7 @@
This repository includes optimized deep learning models and a set of demos to expedite development of high-performance deep learning inference applications. Use these free pre-trained models instead of training your own models to speed-up the development and production deployment process.

## Repository Components:
-* [Pre-Trained Models](intel_models/index.md)
+* [Pre-Trained Models](models/intel/index.md)
* [Model Downloader](tools/downloader/README.md) and other automation tools
* [Demos](demos/README.md) that demonstrate models usage with Deep Learning Deployment Toolkit
* [Accuracy Checker](tools/accuracy_checker/README.md) tool for models accuracy validation
2 changes: 1 addition & 1 deletion demos/README.md
@@ -33,7 +33,7 @@ To run the demo applications, you can use images and videos from the media files

> **NOTE:** Inference Engine HDDL and FPGA plugins are available in [proprietary](https://software.intel.com/en-us/openvino-toolkit) distribution only.
-You can download the [pre-trained models](../intel_models/index.md) using the OpenVINO [Model Downloader](../tools/downloader/README.md) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+You can download the [pre-trained models](../models/intel/index.md) using the OpenVINO [Model Downloader](../tools/downloader/README.md) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
The table below shows the correlation between models, demos, and supported plugins. The plugins names are exactly as they are passed to the demos with `-d` option. The correlation between the plugins and supported devices see in the [Supported Devices](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html) section.

> **NOTE:** **MYRIAD** below stands for Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ Vision Processing Units.
2 changes: 1 addition & 1 deletion demos/crossroad_camera_demo/README.md
@@ -8,7 +8,7 @@ reports person attributes like gender, has hat, has long-sleeved clothes
* `person-reidentification-retail-0079`, which is executed on top of the results from the first network and prints
a vector of features for each detected person. This vector is used to conclude if it is already detected person or not.

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

Other demo objectives are:
* Images/Video/Camera as inputs, via OpenCV*
2 changes: 1 addition & 1 deletion demos/gaze_estimation_demo/README.md
@@ -8,7 +8,7 @@ The demo also relies on the following auxiliary networks:
* `head-pose-estimation-adas-0001`, which estimates head pose in Tait-Bryan angles, serving as an input for gaze estimation model
* `facial-landmarks-35-adas-0002`, which estimates coordinates of facial landmarks for detected faces. The keypoints at the corners of eyes are used to locate eyes regions required for the gaze estimation model

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

Other demo objectives are:
* Video/Camera as inputs, via OpenCV*
2 changes: 1 addition & 1 deletion demos/human_pose_estimation_demo/README.md
@@ -4,7 +4,7 @@ This demo showcases the work of multi-person 2D pose estimation algorithm. The t

* `human-pose-estimation-0001`, which is a human pose estimation network, that produces two feature vectors. The algorithm uses these feature vectors to predict human poses.

-For more information about the pre-trained model, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained model, refer to the [model documentation](../../models/intel/index.md).

The input frame height is scaled to model height, frame width is scaled to preserve initial aspect ratio and padded to multiple of 8.

2 changes: 1 addition & 1 deletion demos/interactive_face_detection_demo/README.md
@@ -10,7 +10,7 @@ This demo executes four parallel infer requests for the Age/Gender Recognition,
* `emotions-recognition-retail-0003`, which is executed on top of the results of the first model and reports an emotion for each detected face
* `facial-landmarks-35-adas-0002`, which is executed on top of the results of the first model and reports normed coordinates of estimated facial landmarks

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

Other demo objectives are:

2 changes: 1 addition & 1 deletion demos/multichannel_demo/fd/README.md
@@ -3,7 +3,7 @@
This demo provides an inference pipeline for multi-channel face detection. The demo uses Face Detection network. You can use the following pre-trained model with the demo:
* `face-detection-retail-0004`, which is a primary detection network for finding faces

-For more information about the pre-trained models, refer to the [model documentation](../../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../../models/intel/index.md).

Other demo objectives are:

2 changes: 1 addition & 1 deletion demos/multichannel_demo/hpe/README.md
@@ -3,7 +3,7 @@
This demo provides an inference pipeline for Multi-Channel Human Pose Estimation. The demo uses Human Pose Estimation network. You can use the following pre-trained model with the demos:
* `human-pose-estimation-0001`

-For more information about the pre-trained models, refer to the [model documentation](../../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../../models/intel/index.md).

Other demo objectives are:

2 changes: 1 addition & 1 deletion demos/pedestrian_tracker_demo/README.md
@@ -6,7 +6,7 @@ You can use a set of the following pre-trained models with the demo:
* _person-detection-retail-0013_, which is the primary detection network for finding pedestrians
* _person-reidentification-retail-0031_, which is the network that is executed on top of the results from inference of the first network and makes reidentification of the pedestrians

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

## How It Works

2 changes: 1 addition & 1 deletion demos/python_demos/action_recognition/README.md
@@ -6,7 +6,7 @@ The following pre-trained models are delivered with the product:
* `driver-action-recognition-adas-0002-encoder` + `driver-action-recognition-adas-0002-decoder`, which are models for driver monitoring scenario. They recognize actions like safe driving, talking to the phone and others
* `action-recognition-0001-encoder` + `action-recognition-0001-decoder`, which are general-purpose action recognition (400 actions) models for Kinetics-400 dataset.

-For more information about the pre-trained models, refer to the [model documentation](../../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../../models/intel/index.md).

How It Works
------------
2 changes: 1 addition & 1 deletion demos/security_barrier_camera_demo/README.md
@@ -8,7 +8,7 @@ reports general vehicle attributes, for example, vehicle type (car/van/bus/track
* `license-plate-recognition-barrier-0001`, which is executed on top of the results from the first network
and reports a string per recognized license plate

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

Other demo objectives are:
* Video/Camera as inputs, via OpenCV\*
2 changes: 1 addition & 1 deletion demos/smart_classroom_demo/README.md
@@ -12,7 +12,7 @@ a vector of features for each detected face.
* `person-detection-raisinghand-recognition-0001`, which is a detection network for finding students and simultaneously predicting their current actions (in contrast with the previous model, predicts only if a student raising hand or not).
* `person-detection-action-recognition-teacher-0002`, which is a detection network for finding persons and simultaneously predicting their current actions.

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

## How It Works

2 changes: 1 addition & 1 deletion demos/super_resolution_demo/README.md
@@ -7,7 +7,7 @@ You can use the following pre-trained model with the demo:
* `single-image-super-resolution-1033`, which is the primary and only model that
performs super resolution 4x upscale on a 200x200 image

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

## How It Works

2 changes: 1 addition & 1 deletion demos/text_detection_demo/README.md
@@ -7,7 +7,7 @@ The demo shows an example of using neural networks to detect and recognize print
* `text-recognition-0012`, which is a recognition network for recognizing text.
* `handwritten-score-recognition-0001`, which is a recognition network for recognizing handwritten score marks like `<digit>` or `<digit>.<digit>`.

-For more information about the pre-trained models, refer to the [model documentation](../../intel_models/index.md).
+For more information about the pre-trained models, refer to the [model documentation](../../models/intel/index.md).

## How It Works

Expand Down
File renamed without changes.
