According to the argument description, I assume that setting args.dezoom_factor to 2.0 means the whole-slide image has 40x as its base magnification (similar to aperio.AppMag in TCGA images). In lines 86 to 90 of 'patch_gen_hdf5.py', patch_size_resized is calculated as
```python
resize_factor = float(slide.properties.get('aperio.AppMag', 20)) / 20.0
if not slide.properties.get('aperio.AppMag', 20): print(f"magnifications for {slide_id} is not found, using default magnificantion 20X")
resize_factor = resize_factor * args.dezoom_factor
patch_size_resized = (int(resize_factor * patch_size[0]), int(resize_factor * patch_size[1]))
```
Here, if patch_size is (256, 256), patch_size_resized becomes (1024, 1024). This is then resized back to 256 in line 118, which, I think, makes it equivalent to a 256x256 tile at 10x rather than at 20x as originally intended (as mentioned in the paper).
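To make the arithmetic behind my concern explicit, here is a minimal sketch with my own numbers (not the repo's code), assuming aperio.AppMag is 40 and args.dezoom_factor is 2.0:

```python
# Sketch of the resize arithmetic, assuming AppMag = 40 and dezoom_factor = 2.0.
app_mag = 40.0
dezoom_factor = 2.0
patch_size = (256, 256)

resize_factor = (app_mag / 20.0) * dezoom_factor            # 2.0 * 2.0 = 4.0
patch_size_resized = (int(resize_factor * patch_size[0]),
                      int(resize_factor * patch_size[1]))   # (1024, 1024)

# A 1024x1024 region read at 40x and downsampled to 256x256
# covers the same tissue area as a 256x256 tile at 10x.
effective_mag = app_mag / resize_factor                     # 10.0
print(patch_size_resized, effective_mag)
```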
Am I correct in assuming that args.dezoom_factor corresponds to the aperio.AppMag in TCGA?
I also couldn't find any script that runs inference at the whole-slide-image level, where the slide-level gene expression data could be used together with the hazard coefficients from Supplementary Data 5 to obtain risk scores. So another question: how were the risk scores for the TCGA predictions in the source data file 'fig4e_risk_score_TCGA_pred.csv' generated?
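For reference, what I have in mind is a standard Cox-style linear predictor, i.e. multiplying each gene's (predicted) expression value by its hazard coefficient and summing per slide. A minimal sketch of that computation; the file and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical inputs: slide-level expression (slides x genes) and
# hazard coefficients from Supplementary Data 5 (one coefficient per gene).
expr = pd.read_csv("slide_level_gene_expression.csv", index_col=0)
coefs = pd.read_csv("supplementary_data_5_hazard_coefficients.csv",
                    index_col="gene")["coefficient"]

# Cox linear predictor: risk_score = sum_i beta_i * x_i for each slide.
common_genes = expr.columns.intersection(coefs.index)
risk_scores = expr[common_genes] @ coefs.loc[common_genes]
print(risk_scores.head())
```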
args.dezoom_factor is actually a redundant variable here, as its default value is 1. It was originally implemented for cases where the image is scanned with a scanner other than Aperio and a manual factor must be set to adjust the patch size. In our case, the resize factor can be determined automatically from the "aperio.AppMag" property, so args.dezoom_factor does not need to be changed from its default.
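In other words, with the default dezoom_factor of 1, the effective magnification of the saved tiles stays at 20x. A minimal sketch of the default behaviour for a 40x Aperio slide (numbers only, not the actual repo code):

```python
# Default behaviour: dezoom_factor = 1, 40x Aperio slide.
app_mag = 40.0
dezoom_factor = 1.0          # default value
patch_size = (256, 256)

resize_factor = (app_mag / 20.0) * dezoom_factor            # 2.0
patch_size_resized = (int(resize_factor * patch_size[0]),
                      int(resize_factor * patch_size[1]))   # (512, 512)

# A 512x512 region at 40x, downsampled to 256x256, matches a 256x256 tile at 20x.
print(patch_size_resized, app_mag / resize_factor)          # (512, 512) 20.0
```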
To run inference on whole-slide images, we have uploaded our pre-trained models to https://huggingface.co/gevaertlab; they can be loaded by following "Step 5 (Optional): load published model checkpoint". A sample prediction script is available at evaluation/predict.py.
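As a rough illustration, a checkpoint hosted under that organization could be fetched and loaded along these lines; the repo_id, filename, and model class below are placeholders, so the actual names from Step 5 and evaluation/predict.py should be used instead:

```python
import torch
from huggingface_hub import hf_hub_download

# Download one checkpoint file from the gevaertlab organization on the Hub.
# NOTE: repo_id and filename are placeholders, not the real artifact names.
ckpt_path = hf_hub_download(
    repo_id="gevaertlab/some-model",     # placeholder repo id
    filename="model_checkpoint.pt",      # placeholder filename
)

# Load the weights; the model class itself comes from this repository's code.
state_dict = torch.load(ckpt_path, map_location="cpu")
# model = SomeModelClass(...)            # placeholder model class
# model.load_state_dict(state_dict)
# model.eval()
```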