I'm trying to train a human-only segmentation model using the Cityscapes dataset in open-mmlab/mmsegmentation.
So I modified the dataset as follows, but it does not produce normal results.
Please advise.
I ran createTrainIdLabelImgs.py to create the *_labelTrainIds.png files based on the contents of the modified labels.py. Please also check whether this part is working properly.
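For reference, the change to labels.py was along these lines (only the person entry is shown; this is a sketch of the edit, not the exact file contents):

```python
# cityscapesscripts/helpers/labels.py (excerpt, illustrative):
# 'person' (id 24) is given trainId 0; every other Label entry was switched to
# trainId 255, the ignore value used by cityscapesScripts.
Label(  'person' , 24 , 0 , 'human' , 6 , True , False , (220, 20, 60) ),
```

The *_labelTrainIds.png files were then regenerated with:

```bash
export CITYSCAPES_DATASET=`pwd`
python preparation/createTrainIdLabelImgs.py
```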
Result: I thought only person would be represented in the gt_mask here, but other objects, such as cars and power poles, are still visible in it. Is this normal?
We then trained with this data. This part is on the mmsegmentation side, so it may be difficult to help with, but I would appreciate it if you could treat the results below just as a reference.
We trained from scratch without a pretrained model (because the pretrained one was trained with 19 classes). The loss is around -19.
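For context, the kind of config override used on the mmsegmentation side looks roughly like this (the base config file is only an example and not necessarily the exact one used; num_classes=2 corresponds to the person/rider run mentioned below):

```python
# Illustrative mmsegmentation config override, not the exact training config:
# shrink the decode/auxiliary heads to the number of trainIds that were kept
# (2 for the run with person and rider as trainIds 0 and 1).
_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'

model = dict(
    decode_head=dict(num_classes=2),
    auxiliary_head=dict(num_classes=2))
```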
After 1000 iterations, the result is as follows. When I check the *_labelTrainIds.png files of the test set, they are all white, so the accuracy comes out as 100 and every pixel is predicted as the road class. But, as set above, only person should have been given trainId 0 and been trained.
The first picture is a *_labelTrainIds.png picked at random from the test set, and the second picture is the evaluation result.
(Sorry for not posting a picture that exactly matches the description; this one is from a run with the classIds of person and rider set to 0 and 1.)
I've been trying to change the settings all day, but I can't solve it. Any help would be appreciated. Thank you.
Hi, greetings. Please don't take my words as final; I've only just started working on this and haven't reached a conclusion, so I'm sharing intermediate findings.
Also, I'm not using this code base, so my way of working may be different.
This is how you can proceed:
Pass the dataset to a PyTorch DataLoader for iteration.
Change all values in the mask to 0 except the person label, i.e. map everything to 0 and 24 (person) to 1. You can do this in the dataset class by inheriting the PyTorch Cityscapes class and modifying it (see the sketch after these steps).
Now you have a binary segmentation problem.
You can use any segmentation framework; I'd suggest trying qubvel's segmentation_models.pytorch.
You can also take some help from my binary segmentation task for medical images, linked here.
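A minimal sketch of that dataset change, assuming torchvision's Cityscapes class with semantic label-id targets (the class name, path, and parameters here are placeholders for illustration):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import Cityscapes

# Hypothetical wrapper: keep only 'person' (Cityscapes label id 24) as 1 and
# map every other pixel to 0, turning the target into a binary mask.
class PersonOnlyCityscapes(Cityscapes):
    def __getitem__(self, index):
        image, target = super().__getitem__(index)   # target: PIL image of label ids
        mask = np.array(target, dtype=np.int64)
        binary = (mask == 24).astype(np.int64)       # person -> 1, everything else -> 0
        return image, torch.from_numpy(binary)

# Usage sketch ('data/cityscapes' is a placeholder path):
dataset = PersonOnlyCityscapes(
    'data/cityscapes', split='train', mode='fine', target_type='semantic',
    transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=2, shuffle=True)
```

With the masks in that form, any binary segmentation model (for example smp.Unet(..., classes=1) from qubvel's segmentation_models.pytorch) can be trained on this loader.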