
Sample Evaluation in an Interactive Environment


1. Change into the repository directory:

cd MultiVec2Text

2. Start an interactive session in the Singularity container and activate the conda environment (make sure the compute node has GPUs):

srun --gres=gpu:1 --time=12:00:00 --pty singularity shell --nv ~/pytorch_23.10-py3.sif 

source /home/xxxx/xxxx/miniconda3/etc/profile.d/conda.sh
conda activate v2t
python  # start a Python shell
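
Optionally, before loading any model, verify from the Python shell that the GPU is visible. This is a quick sanity check; it only assumes that PyTorch is installed in the v2t environment:

import torch
print(torch.cuda.is_available())           # should print True on a GPU node
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the allocated GPU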

3. Run the following commands in the Python shell to evaluate:

3.1. Define your own samples:

samples = [
    "ford wird aufgefordert 1,3 millionen suvs wegen abgasen zurückzurufen",
    "ford urged to recall 1.3 million suvs over exhaust fumes",
    "ford instó a retirar 1.3 millones suvs por el escape de humos",
    "ford doit rappeler 1,3 million de suv en raison des gaz d'échappement.",
]

3.2. Evaluate the samples:

from eval_samples import *
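# the wildcard import is assumed to bring analyze_utils, trainer_attributes, and evaluate_samples into scope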

model_path = "yiyic/t5_me5_base_mtg_en_fr_de_es_5m_32_corrector"

# load the pretrained corrector experiment and its trainer (use_less_data limits how much data is loaded)
experiment, trainer = analyze_utils.load_experiment_and_trainer_from_pretrained(
    model_path, use_less_data=3000
)

# set up trainer attributes and get the compute device
trainer, device = trainer_attributes(trainer, experiment)

# number of recursive correction steps
trainer.num_gen_recursive_steps = 10
# sequence beam width (sbeam)
trainer.sequence_beam_width = 1

# evaluate samples.
evaluate_samples(trainer, device, samples)
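
To see how the number of correction steps influences the reconstructions, you can rerun the evaluation with different settings. A minimal sketch, reusing the trainer, device, and samples defined above:

# compare a few values for the number of recursive correction steps
for steps in [1, 5, 10, 20]:
    trainer.num_gen_recursive_steps = steps
    print(f"--- num_gen_recursive_steps = {steps} ---")
    evaluate_samples(trainer, device, samples)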