* using [step parameterization](https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline#parametrizing-a-step) and [step caching](https://docs.zenml.io/user-guides/starter-guide/cache-previous-executions#caching-at-a-step-level) to design flexible and reusable steps (see the sketch after this list)
* using [custom data types for your artifacts and writing materializers for them](https://docs.zenml.io/how-to/handle-data-artifacts/handle-custom-data-types)
* constructing and running a [ZenML pipeline](https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline)
* usage of the ZenML Model Control Plane
* best practices for implementing and running reproducible and reliable ML pipelines with ZenML
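As a quick illustration of the first point, here is a minimal sketch of a parameterized, cacheable step; the step name, split logic, and `test_size` parameter are illustrative assumptions, not this template's actual code:

```python
from typing import Tuple

import pandas as pd
from sklearn.model_selection import train_test_split
from typing_extensions import Annotated
from zenml import step


@step(enable_cache=True)  # reruns with unchanged code and inputs hit the cache
def train_data_splitter(
    dataset: pd.DataFrame,
    test_size: float = 0.2,  # step parameter, overridable per pipeline run
) -> Tuple[
    Annotated[pd.DataFrame, "train_set"],
    Annotated[pd.DataFrame, "test_set"],
]:
    """Split the dataset into named train and test artifacts."""
    train, test = train_test_split(dataset, test_size=test_size)
    return train, test
```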
The process of loading data is similar to training: even the same step function is used, but with the `is_inference` flag enabled.
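A minimal sketch of such a dual-purpose loader; the dataset and the hold-out logic are assumptions for illustration:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from zenml import step


@step
def data_loader(random_state: int, is_inference: bool = False) -> pd.DataFrame:
    """Load the dataset, holding out an inference subset unseen in training."""
    dataset = load_breast_cancer(as_frame=True).frame
    inference_subset = dataset.sample(frac=0.05, random_state=random_state)
    if is_inference:
        # Score rows the model has never seen, without the target column.
        return inference_subset.drop(columns=["target"])
    # Train on everything except the held-out inference rows.
    return dataset.drop(inference_subset.index)
```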
But the inference flow has an important difference: there is no need to fit the preprocessing sklearn `Pipeline` again; instead, we reuse the one fitted during training on the train set, to ensure that the model object gets the expected input. To do so we use the [Model interface](https://docs.zenml.io/user-guides/starter-guide/track-ml-models#configuring-a-model-in-a-pipeline) with a lookup by artifact name inside the model context to retrieve the preprocessing pipeline fitted during the quality-assured training run. This is possible because we configured the batch inference pipeline to run inside a Model Control Plane version context.
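In step code, that lookup can look roughly like this; the artifact name `preprocess_pipeline` is an assumption and must match whatever name the training pipeline logged the fitted `Pipeline` under:

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from zenml import get_step_context, step


@step
def inference_preprocessor(dataset_inf: pd.DataFrame) -> pd.DataFrame:
    """Transform inference data with the pipeline fitted during training."""
    # The step runs inside a Model Control Plane version context, so the
    # model object can resolve artifacts logged by the training run.
    model = get_step_context().model
    preprocess_pipeline: Pipeline = model.get_artifact(
        "preprocess_pipeline"  # assumed artifact name from the training run
    ).load()
    return pd.DataFrame(preprocess_pipeline.transform(dataset_inf))
```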
In the drift reporting stage, we use the [standard step](https://docs.zenml.io/stack-components/data-validators/evidently#the-evidently-data-validator) `evidently_report_step` to build an Evidently report that assesses certain data quality metrics. `evidently_report_step` has a number of options, but for this example we build only the `DataQualityPreset` metrics preset to get the number of NA values in the reference and current datasets.
We pass `dataset_trn` from the training pipeline as the `reference_dataset` here. To do so we use the [Model interface](https://docs.zenml.io/user-guides/starter-guide/track-ml-models#configuring-a-model-in-a-pipeline) with a lookup by artifact name inside the model context to retrieve the training dataset used during the quality-assured training run. This is possible because we configured the batch inference pipeline to run inside a Model Control Plane version context.
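Put together, the drift reporting part of the batch inference pipeline can look roughly like this sketch, reusing the `data_loader` step sketched above and the `drift_quality_gate` step sketched below; `dataset_trn` is assumed to match the artifact name logged by the training pipeline:

```python
from zenml import get_pipeline_context, pipeline
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.steps import evidently_report_step


@pipeline
def batch_inference(random_state: int):
    # Load fresh data for scoring (see the data loader sketch above).
    df_inference = data_loader(random_state=random_state, is_inference=True)
    # Resolve the training dataset from the model version context to serve
    # as the drift reference for the Evidently report.
    report, _ = evidently_report_step(
        reference_dataset=get_pipeline_context().model.get_artifact(
            "dataset_trn"  # assumed artifact name from the training pipeline
        ),
        comparison_dataset=df_inference,
        metrics=[EvidentlyMetricConfig.metric("DataQualityPreset")],
    )
    # Stop the run if NA drift is too high (see the gate sketch below).
    drift_quality_gate(report)
```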
302
302
303
303
After the report is built, we execute another quality gate using the `drift_quality_gate` step, which assesses whether a significant drift in the NA count is observed. If so, execution is stopped with an exception.
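A sketch of such a gate; the exact JSON keys of the `DataQualityPreset` result and the `na_drift_tolerance` parameter are assumptions:

```python
import json

from zenml import step


@step
def drift_quality_gate(report: str, na_drift_tolerance: float = 0.1) -> None:
    """Raise if the NA count in the current data drifts beyond tolerance."""
    # Assumed layout: the first metric's result carries per-dataset summaries.
    result = json.loads(report)["metrics"][0]["result"]
    current_na = result["current"]["number_of_missing_values"]
    reference_na = result["reference"]["number_of_missing_values"]
    if current_na > (1 + na_drift_tolerance) * reference_na:
        raise RuntimeError(
            "Number of NA values in the current dataset is significantly "
            "higher than in the reference dataset."
        )
```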