Config file #7
Comments
Hi miaoqiz, You may follow the running format in run.sh:
python run.py $CONFIG.py
The path of the config file could be an absolute or relative path. :) Best,
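The reply above describes the invocation pattern; as a small illustration, here is a minimal sketch of how a ".py" config passed on the command line can be loaded from either an absolute or a relative path. This is not the repository's actual run.py (which may use a different config loader); the function and argument names are placeholders.

```python
# Minimal sketch, not the actual run.py: load a plain-Python config file whose
# path may be given as absolute or relative (relative paths resolve against the
# current working directory).
import argparse
import importlib.util
import os.path as osp

def load_py_config(path):
    """Load a .py config file and return its module namespace."""
    abs_path = osp.abspath(osp.expanduser(path))
    spec = importlib.util.spec_from_file_location("config", abs_path)
    cfg = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(cfg)
    return cfg

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("config", help="path to the config .py file (absolute or relative)")
    args = parser.parse_args()
    cfg = load_py_config(args.config)
    print(getattr(cfg, "dataset", None))   # e.g. the 'dataset' dict quoted later in this thread
```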
Hi, Thanks again for the quick reply! The "run.sh" script ran well after all the modality datasets were collected. However, when following the three lines of commands in "GETTING_STARTED.md", it has the following error:
...data and model loaded
When printing the lengths of "train_loader", "test_loader", and "val_loader", they are all "0". Thus it cannot enter the inference session inside the "test(...)" function in "run.py". Also, where do you define the source of the data retrieved in the following line in "get_data.py"?
trainSet = Preprocessor(cfg, partition['train'], data_dict)
In the config, it says:
'dataset': {'name': 'demo', 'mode': ['image']}
Does it mean it is located in the "data/demo" folder? Also, what is the output like for cut scenes? Are there individual videos or time codes? Thanks so much and have a good day!
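Since the symptom above is that every loader has length 0, a quick sanity check of the assumed data location can rule out a missing or empty folder before digging into get_data.py. The data/demo path below is only an assumption based on 'name': 'demo'; the actual layout expected by the Preprocessor may differ.

```python
# Sanity check (assumptions noted): confirm the assumed data root exists and is
# non-empty before suspecting the DataLoader itself.
import os

DATA_ROOT = os.path.join("data", "demo")   # assumption: 'name': 'demo' maps to data/demo

def describe_data_root(root):
    if not os.path.isdir(root):
        print(f"{root} does not exist -- the Preprocessor would have nothing to index")
        return
    for sub in sorted(os.listdir(root)):
        path = os.path.join(root, sub)
        count = len(os.listdir(path)) if os.path.isdir(path) else 1
        print(f"{sub}: {count} entries")

describe_data_root(DATA_ROOT)

# If the folders are populated but len(train_loader) is still 0, printing the
# partition handed to Preprocessor(cfg, partition['train'], data_dict) is the
# next step, since an empty partition list also yields an empty loader.
```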
Hi there, How are you? Thanks for updating the code base! There seems to be an issue when running:
python run.py ../config/demo.py
...visualize scene video in demo mode, the above quantitive metrics are invalid
Can you kindly advise? Also, the sample link in "pre/demodownload.py" seems to yield an error related to "key". Thanks so much and have a good day!
Hi @miaoqiz Thanks for your question. For sure, since the demo video has no ground truth, the script only visualizes the scene video in demo mode, and the quantitative metrics it prints are not valid in that case.
Best,
@AnyiRao Thanks! Do you mean the "pair_list" is used to construct ground-truth images if available? Specifically, since in this scenario the video is from somewhere else, after we segment the video there are no ground-truth frames to compare with, so the "pair_list" is thereby empty. Also, can you kindly explain how to interpret the file format inside the "shot_txt" folder? There are 5 columns, and it is unclear which one is which. Lastly, there seems to be a small typo in "pre/place/extract_feat.py" at line #191:
img_path = get_img_folder(args.img_path, video_id)
Do you mean:
img_path = get_img_folder(args.source_img_path, video_id)
Thanks so much and have a good day!
Hi @miaoqiz Thanks for your question. The code segment you are referring to is https://github.com/AnyiRao/SceneSeg/blob/master/lgss/utilis/dataset_utilis.py#L43
The purpose of the "pair_list" is as you describe: it pairs shots with their ground-truth annotations, so it stays empty when no ground truth is available.
The format of the files inside "shot_txt" is discussed in #5.
Thanks for pointing out the typo. Already fixed. 👍
Best,
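To make the ground-truth pairing point concrete, here is a schematic illustration of the idea; it is not the code at dataset_utilis.py#L43, and the function and variable names are invented for the example.

```python
# Schematic illustration (not the repository's actual dataset_utilis.py code):
# pairs are only formed when a shot has a ground-truth annotation, so a demo
# video without annotations yields an empty pair_list.
def build_pair_list(shot_ids, annotations):
    """Pair each shot id with its ground-truth label when one exists."""
    pair_list = []
    for shot_id in shot_ids:
        label = annotations.get(shot_id)   # None when there is no ground truth
        if label is not None:
            pair_list.append((shot_id, label))
    return pair_list

# A demo video has shots but no annotations, so the result is [].
print(build_pair_list(["0000", "0001", "0002"], {}))
```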
@AnyiRao Thanks so much! The link is: https://www.youtube.com/watch?v=BSG5iHK9Scw The content is similar to what you had before, and visually you can observe many scene changes. Can you kindly help check whether any scenes are detected? Thanks so much!
@miaoqiz Have you solved the problem "TypeError: 'NoneType' object is not iterable"?
As @AnyiRao explained, the tool is probably able to detect scenes by converting shots only for some videos. Thanks!
@miaoqiz The scenes are getting detected only for the sample video. For any other movie clip (Marvel-type action / drama), the scenes are not getting detected and therefore I get the NoneType error. Did you manage to find a way to get this working for other movie clips?
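One way to at least avoid the crash reported above is to guard the result of the detection step before iterating over it, since a run that finds no scenes may return None rather than an empty list. This is a hedged workaround sketch; detect_scenes is a stand-in name, not a function from this repository.

```python
# Hedged workaround sketch: guard a possibly-None scene list before iterating,
# so a video with no detected scenes is reported instead of raising
# "TypeError: 'NoneType' object is not iterable".
def iterate_scenes_safely(detect_scenes, video_path):
    scenes = detect_scenes(video_path)
    if not scenes:                       # covers both None and an empty list
        print(f"No scenes detected for {video_path}; skipping downstream steps.")
        return []
    for idx, scene in enumerate(scenes):
        print(f"scene {idx}: {scene}")
    return scenes

if __name__ == "__main__":
    # Stand-in detector that finds nothing, mimicking the failing case.
    iterate_scenes_safely(lambda _: None, "some_movie.mp4")
```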
Hi,
To run "run.py" inside "lgss", there seems to be a "Config" file:
Can you kindly point us to its path if it is somewhere in the current repository? Or can you kindly provide a template?
Thanks so much and have a good day!
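For reference, a minimal, hypothetical config sketch is shown below. Only the dataset entry is taken from this thread; every other field is a placeholder and would need to be checked against the real demo config shipped with lgss.

```python
# Hypothetical config sketch (not the repository's actual demo.py).
# Only the `dataset` entry is quoted from this thread; the rest are placeholders.
experiment_name = "demo_run"        # placeholder
data_root = "data/demo"             # assumption: 'name': 'demo' lives under data/

dataset = dict(
    name="demo",
    mode=["image"],                 # modality list quoted in the thread
)

batch_size = 4                      # placeholder loader settings
num_workers = 2
```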