
Merge pull request #67 from multimeric/docs
Docs v2
pr4deepr authored Aug 23, 2024
2 parents f9bf664 + 05ba85c commit 2925622
Showing 7 changed files with 158 additions and 23 deletions.
21 changes: 4 additions & 17 deletions README.md
@@ -7,7 +7,7 @@
[![PyPI - Downloads](https://img.shields.io/pypi/dm/napari-lattice)](https://pypistats.org/packages/napari-lattice)
[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-lattice)](https://napari-hub.org/plugins/napari-lattice)

- This napari plugin allows deskewing, cropping, visualisation and designing custom analysis pipelines for lattice lightsheet data, particularly from the Zeiss Lattice Lightsheet. The plugin has also been otpimixed to run in headless mode.
+ This napari plugin allows deskewing, cropping, visualisation and designing custom analysis pipelines for lattice lightsheet data, particularly from the Zeiss Lattice Lightsheet. The plugin has also been optimized to run in headless mode.


## **Documentation**
@@ -17,10 +17,7 @@ Check the [Wiki page](https://github.com/BioimageAnalysisCoreWEHI/napari_lattice

*************


- <p align="left">
-   <img src="https://raw.githubusercontent.com/BioimageAnalysisCoreWEHI/napari_lattice/master/resources/LLSZ_window.png" alt="LLSZ_overview" width="500" >
- </p>
+ ![](deskew.png)

**Functions**

@@ -44,25 +41,15 @@ Apply custom image processing workflows using `napari-workflows`.

Support will be added for more file formats in the future.

- Sample lattice lightsheet data download: https://doi.org/10.5281/zenodo.7117784
+ Sample lattice lightsheet data download: <https://doi.org/10.5281/zenodo.7117784>

----------------------------------

This [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.

- <!--
- Don't miss the full getting started guide to set up your new package:
- https://github.com/napari/cookiecutter-napari-plugin#getting-started
- and review the napari docs for plugin developers:
- https://napari.org/plugins/index.html
- -->


## Contributing

- Contributions are very welcome. Tests can be run with [tox], please ensure
- the coverage at least stays the same before you submit a pull request.
+ Contributions are very welcome. Please refer to the [Development](./development) docs to get started.

## License

4 changes: 2 additions & 2 deletions core/lls_core/models/crop.py
@@ -16,12 +16,12 @@ class CropParams(FieldAccessModel):
        default = []
    )
    roi_subset: List[int] = Field(
-       description="A subset of all the ROIs to process. Each array item should be an index into the ROI list indicating an ROI to include. This allows you to process only a subset of the regions from a ROI file specified using the `roi_list` parameter. If `None`, it is assumed that you want to process all ROIs.",
+       description="A subset of all the ROIs to process. Each list item should be an index into the ROI list indicating an ROI to include. This allows you to process only a subset of the regions from a ROI file specified using the `roi_list` parameter. If `None`, it is assumed that you want to process all ROIs.",
        default=None
    )
    z_range: Tuple[NonNegativeInt, NonNegativeInt] = Field(
        default=None,
-       description="The range of Z slices to take. All Z slices before the first index or after the last index will be cropped out.",
+       description="The range of Z slices to take as a tuple of the form `(first, last)`. All Z slices before the first index or after the last index will be cropped out.",
        cli_description="An array with two items, indicating the index of the first and last Z slice to include."
    )

6 changes: 3 additions & 3 deletions core/lls_core/writers.py
@@ -21,9 +21,9 @@
@dataclass
class Writer(ABC):
"""
A writer is an abstraction over the logic used to write image slices to disk
Writers need to work incrementally, in order that we don't need the entire multidimensional
image in memory at the same time
A writer is an abstraction over the logic used to write image slices to disk.
`Writer`s need to work incrementally, in order that we don't need the entire multidimensional
image in memory at the same time.
"""
lattice: LatticeData
roi_index: RoiIndex
78 changes: 78 additions & 0 deletions docs/api.md
@@ -1,5 +1,83 @@
# Python Usage

## Introduction

The image processing workflow can also be controlled via the Python API.

To do so, first define the parameters:

```python
from lls_core import LatticeData

params = LatticeData(
    input_image="/path/to/some/file.tiff",
    save_dir="/path/to/output"
)
```

Then save the result to disk:
```python
params.save()
```

Or work with the images in memory:
```python
for slice in params.process():
    pass
```

Other more advanced options [are listed below](#lls_core.LatticeData).

## Cropping

Cropping functionality can be enabled by setting the `crop` parameter:

```python
from lls_core import LatticeData, CropParams

params = LatticeData(
    input_image="/path/to/some/file.tiff",
    save_dir="/path/to/output",
    crop=CropParams(
        roi_list=["/path/to/roi.zip"]
    )
)
```

Other more advanced options [are listed below](#lls_core.CropParams).
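
For instance, a sketch combining the additional `roi_subset` and `z_range` fields (both documented in `CropParams` below) might look like this:

```python
from lls_core import LatticeData, CropParams

params = LatticeData(
    input_image="/path/to/some/file.tiff",
    save_dir="/path/to/output",
    crop=CropParams(
        roi_list=["/path/to/roi.zip"],
        roi_subset=[0, 1],  # only process the first two ROIs in the file
        z_range=(5, 20),    # Z slices outside (first, last) are cropped out
    )
)
```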

## Type Checking

Because of Pydantic idiosyncrasies, the `LatticeData` constructor can accept more data types than the type system realises.
For example, `input_image="/some/path"`, as used above, is not considered correct, because ultimately the input image has to become an `xarray` (aka `DataArray`).
You can solve this in three ways.

The first is to use the types precisely as defined. In this case, we might define the parameters "correctly" (if verbosely) like this:

```python
from lls_core import LatticeData
from aicsimageio import AICSImage
from pathlib import Path

params = LatticeData(
    # Note: xarray_dask_data is a property, not a method
    input_image=AICSImage("/path/to/some/file.tiff").xarray_dask_data,
    save_dir=Path("/path/to/output")
)
```

The second is to use `LatticeData.parse_obj`, which takes a dictionary of options and allows incorrect types:

```python
params = LatticeData.parse_obj({
    "input_image": "/path/to/some/file.tiff",
    "save_dir": "/path/to/output"
})
```

Finally, if you're using MyPy, you can install the [pydantic plugin](https://docs.pydantic.dev/latest/integrations/mypy/), which solves this problem via the `init_typed = False` option.
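
For reference, a minimal sketch of that setup in `pyproject.toml` might look like the following (the setting names should be checked against the plugin docs):

```toml
[tool.mypy]
plugins = ["pydantic.mypy"]

[tool.pydantic-mypy]
# leave the generated __init__ untyped so paths and strings are accepted
init_typed = false
```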

## API Docs

::: lls_core.LatticeData
options:
members:
18 changes: 17 additions & 1 deletion docs/development.md
@@ -37,7 +37,9 @@ The CLI is defined using Typer: <https://typer.tiangolo.com/>.
These packages are used to define the GUI, which you can find in `plugin/napari_lattice`.
[`magicclass`](https://hanjinliu.github.io/magic-class/) builds on [`magicgui`](https://pyapp-kit.github.io/magicgui/) by providing the `@magicclass` decorator which turns a Python class into a GUI.
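
As a toy illustration of that pattern (not code from this repository; a Qt backend such as the one napari provides is assumed to be installed):

```python
from magicclass import magicclass

@magicclass
class DeskewDemo:
    # Each public method becomes a button; annotated arguments become input widgets
    def set_angle(self, angle: float = 30.0):
        print(f"Deskew angle set to {angle}")

ui = DeskewDemo()
# ui.show()  # displays the generated GUI, e.g. docked inside napari
```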

- ## Adding a new parameter
+ ### Dev Workflows
+
+ ### Adding a new parameter

Whenever a new parameter is added, the following components need to be updated:

@@ -48,6 +50,20 @@

An example of this can be found in this commit: <https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/pull/47/commits/16b28fec307f19e73b8d55e677621082037b2710>.

### Adding a new image reader

There are currently no dedicated image reader classes. Instead, a pydantic validator converts the image from a path to an array, or from an array into an `xarray`. A new format could be implemented in this validator: <https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/blob/b33cc4ca5fe0fb89d730cefdbe3169f984f1fe89/core/lls_core/models/deskew.py#L176-L202>
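
As a loose sketch of that pattern (the class and field names here are illustrative, not the real ones from `deskew.py`):

```python
from pathlib import Path

import numpy as np
from pydantic import BaseModel, validator
from xarray import DataArray

class ImageParams(BaseModel):
    input_image: DataArray

    class Config:
        arbitrary_types_allowed = True

    @validator("input_image", pre=True)
    def read_image(cls, v):
        # A new file format would get its own branch here, loading the
        # path into an array before the DataArray conversion below
        if isinstance(v, (str, Path)):
            v = np.load(str(v))  # stand-in for a real reader such as AICSImage
        if not isinstance(v, DataArray):
            v = DataArray(v)
        return v
```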

### Adding a new image writer

1. Create a new writer which inherits from the `lls_core.writers.Writer` class and implements its `write_slice` method (a sketch follows this list):

::: lls_core.writers.Writer.write_slice

2. [Add a new option to the `SaveFileType` enum](https://github.com/multimeric/napari_lattice/blob/b33cc4ca5fe0fb89d730cefdbe3169f984f1fe89/core/lls_core/models/output.py#L11-L16)

3. [Then, return the correct writer class based on the enum value](https://github.com/BioimageAnalysisCoreWEHI/napari_lattice/blob/b33cc4ca5fe0fb89d730cefdbe3169f984f1fe89/core/lls_core/models/lattice_data.py#L474-L480).
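
Putting these steps together, a hypothetical writer might look roughly like this. The exact `write_slice` signature should be taken from the docs rendered above; the slice argument and its `.data` attribute are assumptions here:

```python
from dataclasses import dataclass

import numpy as np
from lls_core.writers import Writer

@dataclass
class NpyWriter(Writer):
    """Hypothetical writer that dumps each incoming slice to a .npy file."""
    slices_written: int = 0

    def write_slice(self, slice):
        # Assumed: each slice exposes its pixel data as an array-like `.data`
        np.save(f"slice_{self.slices_written}.npy", np.asarray(slice.data))
        self.slices_written += 1
```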

## Testing

The tests are run using [pytest](https://docs.pytest.org/en/7.4.x/).
50 changes: 50 additions & 0 deletions docs/workflow.md
@@ -0,0 +1,50 @@
# Workflows

`lls_core` supports integration with [`napari-workflows`](https://github.com/haesleinhuepf/napari-workflows).
The advantage of this is that you can design a multi-step automated workflow that uses `lls_core` as the pre-processing step.

## Building a Workflow

You can design your workflow via the GUI using [`napari-assistant`](https://github.com/haesleinhuepf/napari-assistant), or directly in the YAML format.

When building your workflow with Napari Assistant, you are actually building a *template* that will be applied to future images.
For this reason, you need to rename your input layer to `deskewed_image`, since this is the exact name of the image that the `lls_core` step produces.

If you want to use YAML, you also have to make sure that the first workflow step to run takes `deskewed_image` as an input.
For example:

```yaml
!!python/object:napari_workflows._workflow.Workflow
_tasks:
median: !!python/tuple
- !!python/name:pyclesperanto_prototype.median_sphere ''
- deskewed_image
- null
- 2
- 2
- 2
```

Workflows are run once for each 3D slice of the image. In other words, the workflow is run separately for each timepoint, each channel, and each region of interest (if cropping is enabled).
This means that you should design your workflow expecting `deskewed_image` to be an exactly 3D array.

If you want to define your own custom functions, you can do so in a `.py` file in the same directory as the workflow `.yml` file.
These will be imported before the workflow is executed.
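
For example, a file such as `my_functions.py` (the name is arbitrary) placed next to the workflow could define a helper, which the YAML would then reference as `!!python/name:my_functions.binarise ''`:

```python
# my_functions.py — imported before the workflow runs.
# Remember that each input is an exactly 3D array.
import numpy as np

def binarise(image: np.ndarray, offset: float = 0.0) -> np.ndarray:
    """Toy custom step: threshold a slice at its mean intensity plus an offset."""
    return (image > image.mean() + offset).astype(np.uint8)
```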

## Running a Workflow

The `--workflow` command-line flag, the `LatticeData(workflow=)` Python parameter, and the Workflow tab of the plugin can all be used to specify the path to a workflow `.yml` file.

If you're using the Python interface, you need to use [`LatticeData.process_workflow()`](api/#lls_core.LatticeData.process_workflow) rather than `.process()`.
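
A minimal sketch, reusing the parameters from earlier plus a workflow (what each yielded item contains depends on your workflow's outputs):

```python
from lls_core import LatticeData

params = LatticeData(
    input_image="/path/to/some/file.tiff",
    save_dir="/path/to/output",
    workflow="/path/to/workflow.yml",
)

for output in params.process_workflow():
    pass  # one result per processed 3D slice
```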

## Outputs

`lls_core` supports workflows that have exactly one "leaf" task: a task whose output is not consumed by any other task. In other words, it's the final task of the workflow.

If you want multiple return values, this task can return a tuple of values (see the sketch at the end of this section). Each of these values must be:

* An array, in which case it is treated as an image slice
* A `dict`, in which case it is treated as a single row of a data frame whose columns are the keys of the `dict`
* A `list`, in which case it is treated as a single row of a data frame whose columns are unnamed

Then, each slice is combined at the end. Image slices are stacked together into their original dimensionality, and data frame rows are stacked into a data frame with one row per channel, timepoint and ROI.
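
For example, a hypothetical leaf task that returns both an image slice and a row of measurements:

```python
import numpy as np

def leaf_task(image: np.ndarray):
    """Final workflow step: returns an image plus a dict, i.e. one data frame row."""
    mask = (image > image.mean()).astype(np.uint8)
    stats = {"mean_intensity": float(image.mean()), "foreground_voxels": int(mask.sum())}
    return mask, stats
```
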
4 changes: 4 additions & 0 deletions mkdocs.yml
@@ -2,6 +2,9 @@ site_name: Napari Lattice

markdown_extensions:
- mkdocs-click
+ - pymdownx.highlight:
+     anchor_linenums: true
+ - pymdownx.superfences

plugins:
- search
@@ -10,6 +13,7 @@ plugins:
  handlers:
    python:
      options:
+       heading_level: 3
        show_root_heading: true
        # Inheritance and source are useful for advanced users,
        # but possibly confusing for others
