Commit 5a4d786

add readme_xing.md for LSH and FSA support
1 parent 51e3017 commit 5a4d786

16 files changed: +2455 −2180 lines changed

.gitattributes (+1)

```diff
@@ -0,0 +1 @@
+executable/ZOPH_RNN_XING filter=lfs diff=lfs merge=lfs -text
```

README_XING.md (+163, new file)
# Decoding with Finite State Acceptor (FSA), Locality Sensitive Hashing (LSH) and Word Alignment (WA)

This is the description of additional features built on top of Zoph\_RNN by Xing Shi.
Please contact [Xing Shi](http://xingshi.me) ([email protected]) with any questions.

All the following papers are based on this code:

1. [Why Neural Translations are the Right Length](http://xingshi.me/data/pdf/EMNLP2016short.pdf)
2. [Does String-Based Neural MT Learn Source Syntax?](http://xingshi.me/data/pdf/EMNLP2016long.pdf)
3. [Generating Topical Poetry](http://xingshi.me/data/pdf/EMNLP2016poem.pdf) (for FSA decoding)
4. [Hafez: an Interactive Poetry Generation System](http://xingshi.me/data/pdf/ACL2017demo.pdf) (for FSA decoding)
5. [Speeding up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary](http://xingshi.me/data/pdf/ACL2017short.pdf) (for LSH and WA decoding)
# Instructions for compiling/using the code

The source code is provided in the `src/` directory. You can compile it into a standalone executable with:

```
bash scripts/compile.xing.sh
```

The executable `ZOPH_RNN_XING` will appear in the root folder. A pre-compiled `ZOPH_RNN_XING` is also provided in the `executable/` folder.

Before compilation, set the 7 environment variables and install the required libraries mentioned in `README.md`. Two additional requirements:

* CUDA version >= 8.0
* gcc version >= 4.9.3
# Decoding with FSA

This section describes how to constrain RNN decoding with an FSA so that every output is accepted by the FSA.

The FSA file should follow the format defined by [Carmel](http://www.isi.edu/licensed-sw/carmel/).
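To make the format concrete, here is a toy acceptor sketched in Carmel-style notation (this example is hypothetical and is not the generated `fsa.txt`; in Carmel's format the first line names the final state and each arc reads `(source (destination "symbol"))` — consult the Carmel documentation for the exact syntax):

```
2
(0 (1 "lstm"))
(1 (2 "is"))
(2 (2 "great"))
```

This acceptor accepts `lstm is`, optionally followed by one or more repetitions of `great`.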
All scripts and sample data files related to this section are in the folder `scripts/fsa/`.

## Example data preparation

We will focus on a simple task: translating numbers into letters, using an FSA file to force the output to draw its words from `["lstm","is","great","slow","luckily","we","make","it","fast","enough","and","with","fsa"]`.

To generate the FSA file and the corresponding train and dev sets:

```
cd scripts/fsa/
python generate_fsa.py
```
It will generate the following:

* source.train.txt : 6 number sentences.
* target.train.txt : 6 letter sentences.
* source.valid.txt : 2 number sentences.
* target.valid.txt : 2 letter sentences.
* fsa.txt

Here, we define `EXEC=../../../executable/ZOPH_RNN_XING`.
### [Train] Train the translation model

```
$EXEC -t source.train.txt target.train.txt model.nn -L 15 -H 40 -N 1 -M 0 1 -B best.nn -a source.valid.txt target.valid.txt -d 0.5 -P -0.05 0.05 -w 5 -m 20 -n 20 -l 1 -A 0.8
```
### [Decode] Decode the top 10 for source.valid.txt

```
$EXEC -k 10 best.nn kbest.txt --print-score 1 -b 20 --decode-main-data-files source.valid.txt
```
## Batch Mode

`Batch Mode` means we translate all the sentences in `source.valid.txt` with the same FSA file `fsa.txt`.

### [Decode + FSA] Decode the top 10 with FSA

```
$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --decode-main-data-files source.valid.txt
```
### [Decode + FSA + Beam Info]

To see the beam cells during decoding, use the flag `--print-beam 1`:

```
$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt
```
### [Decode + FSA + Beam Info + encourage-list + repeat-penalty + adjacent-repeat-penalty + alliteration + wordlen]

Besides the FSA, we provide several other weights to control the output:

1. Encourage lists and encourage weights: each encourage list is a file containing either a list of words (like `enc1.txt`) or a list of words with weights (like `enc2.txt`). Pass the encourage lists with the flag `--encourage-list enc1.txt enc2.txt`. For each encourage list, also assign a weight with the flag `--encourage-weight 1.0,-1.0`.
2. Repeat penalty: discourages repeated words during decoding. Use the flag `--repeat-penalty -1.0`.
3. Adjacent repeat penalty: discourages consecutive repeated words. Use the flag `--adjacent-repeat-penalty -1.0`.
4. Alliteration: use the flag `--alliteration-weight 1.0`.
5. Word length weight: use the flag `--wordlen-weight 1.0`.
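For reference, the two encourage-list formats are plain text with one entry per line. The sample files committed under `scripts/fsa/` look like this. `enc1.txt` (a bare word list):

```
l
s
t
```

`enc2.txt` (words with weights):

```
i 0.3
m -0.5
```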
Please refer to the paper [Hafez: an Interactive Poetry Generation System](http://xingshi.me/data/pdf/ACL2017demo.pdf) for a detailed description of these style controls.

Putting all the flags together, we get:

```
$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --alliteration-weight 1.0 --wordlen-weight 1.0 --encourage-list enc1.txt enc2.txt --encourage-weight 1.0,-1.0
```

To reproduce the results in [Hafez: an Interactive Poetry Generation System](http://xingshi.me/data/pdf/ACL2017demo.pdf), please also check out the [Rhyme Generation code](https://github.com/Marjan-GH/Topical_poetry) and the [Web Interface code](https://github.com/shixing/poem).
## Interactive Mode

Usually, you want to decode different sentences with different FSAs. In `Batch Mode`, you would have to create a new `source.txt` and reload the whole RNN model from disk for each sentence, which can take 1-2 minutes.

Thus, we provide `Interactive Mode`, where you can supply different `source.txt` and `fsa.txt` files without reloading the RNN model.

To enable `Interactive Mode`, use the flag `--interactive 1`:

```
$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --interactive 1
```

Once the RNN model is loaded, it will print the following instructions:

```
Please input k:<k> source_file:<source_file> fsa_file:<fsa_file> repetition:<repetition_weight> alliteration:<alliteration_weight> wordlen:<wordlen_weight> encourage_list_files:<file1>,<file2> encourage_weights:<weight1>,<weight2>
```

Follow the instructions and type the following into STDIN:

```
k:1 source_file:source.single.txt fsa_file:fsa repetition:-1.0 alliteration:1.0 wordlen:1.0 encourage_list_files:enc1.txt,enc2.txt encourage_weights:1.0,-1.0
```

It will decode and print the results to STDOUT. You can then type another command into STDIN to decode another sentence. NOTE: `source.single.txt` should contain only one sentence.
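When driving `Interactive Mode` from another program, it is easy to mis-type the key:value line. A minimal sketch of a helper that assembles the STDIN line (the helper itself is hypothetical; only the key names come from the prompt printed by the executable):

```python
def interactive_command(k, source_file, fsa_file, repetition, alliteration,
                        wordlen, encourage_files, encourage_weights):
    # Assemble one STDIN line in the key:value format the prompt asks for.
    return ("k:{} source_file:{} fsa_file:{} repetition:{} alliteration:{} "
            "wordlen:{} encourage_list_files:{} encourage_weights:{}").format(
        k, source_file, fsa_file, repetition, alliteration, wordlen,
        ",".join(encourage_files),
        ",".join(str(w) for w in encourage_weights))

cmd = interactive_command(1, "source.single.txt", "fsa.txt", -1.0, 1.0, 1.0,
                          ["enc1.txt", "enc2.txt"], [1.0, -1.0])
print(cmd)
```

The resulting string can then be written to the executable's STDIN.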
## Interactive Line Mode

This mode is designed for the interactive poem generation task, where a human and the machine compose a poem line by line, alternately. Use the flag `--interactive-line 1` to enable this mode.

```
$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --interactive-line 1 --interactive 1
```

You can type one of the following three commands into STDIN:

1. `source <source_file>` : run the source-side forward propagation.
2. `words word1 word2 word3` : feed the target-side RNN with the word sequence `word1 word2 word3`. This is supposed to be the line the human composed.
3. `fsaline <fsa_file> encourage_list_files:enc1.txt,enc2.txt encourage_weights:1.0,-1.0 repetition:0.0 alliteration:0.0 wordlen:0.0` : let the RNN continue decoding with the FSA.

Both steps 2 and 3 start from the previous hidden and cell states of the target-side RNN.
# Decoding with Word Alignment

Suppose we are translating from French to English; we can use word alignment information to speed up decoding. Please find the details in paper 5, [Speeding up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary](http://xingshi.me/data/pdf/ACL2017short.pdf).

The command to decode with word alignment information is:

```
$EXEC --decode-main-data-files $SRC_TST -b 12 -L 200 -k 1 $fe_nn_attention $OUTPUT --target-vocab-shrink 2 --f2e-file $ALIGNMENT_FILE --target-vocab-cap $NCAP
```

where `$ALIGNMENT_FILE` is a file that contains the word alignment information in the following format:

```
<Source Word Id> <Target Candidate Word Id 1> <Target Candidate Word Id 2> ... <Target Candidate Word Id 10>
```

Each line starts with `<Source Word Id>` and is followed by `$NCAP` candidate target word ids. The source and target word ids must be consistent with the word ids used by the model `$fe_nn_attention`.
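Such a file can be produced from any word-aligned bitext. A minimal sketch that keeps the `$NCAP` most frequent target candidates per source word id (the counting scheme here is an assumption for illustration; the paper describes its own vocabulary-shrinking setup):

```python
from collections import Counter

def alignment_lines(aligned_pairs, ncap=10):
    # aligned_pairs: iterable of (source_word_id, target_word_id) alignment links.
    # Returns one line per source id: "<src> <cand 1> ... <cand ncap>".
    by_src = {}
    for s, t in aligned_pairs:
        by_src.setdefault(s, Counter())[t] += 1
    return ["{} {}".format(s, " ".join(str(t) for t, _ in by_src[s].most_common(ncap)))
            for s in sorted(by_src)]

# Toy example: source id 3 aligns twice to target 7, once to 9; source 5 to 2.
lines = alignment_lines([(3, 7), (3, 7), (3, 9), (5, 2)], ncap=2)
```

Writing `lines` out with one entry per line yields a file in the format above.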
# Decoding with Locality Sensitive Hashing

To decode with Winner-Take-All (WTA) LSH:

```
$EXEC --decode-main-data-files $SRC_TST -L 100 -k 1 $fe_nn $OUTPUT --lsh-type 1 --WTA-K 16 --WTA-units-per-band 3 --WTA-W 500 --WTA-threshold 5 --target-vocab-policy 3 -b 12 > $LOG
```
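The `--WTA-K` and `--WTA-units-per-band` flags correspond to the standard Winner-Take-All hash: each hash unit permutes the vector, keeps the first K coordinates, and records the position of the maximum; `units-per-band` such codes are then grouped into one band signature. A minimal sketch of the hashing step (illustrative only, not the decoder's implementation; permutations are assumed given):

```python
import numpy as np

def wta_codes(x, perms, K):
    # One code per permutation: the position (0..K-1) of the largest
    # value among the first K permuted coordinates of x.
    return [int(np.argmax(x[np.asarray(p)[:K]])) for p in perms]

def bands(codes, units_per_band):
    # Concatenate consecutive codes into band signatures; two vectors
    # collide in a band only if the whole tuple matches.
    return [tuple(codes[i:i + units_per_band])
            for i in range(0, len(codes), units_per_band)]

x = np.array([0.1, 0.9, 0.3, 0.5])
codes = wta_codes(x, [[2, 0, 1, 3], [3, 1, 0, 2]], K=3)
```

Because only the relative order of coordinates matters, the codes are invariant to monotonic transformations of `x`, which is what makes WTA hashing suitable for finding large inner products cheaply.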

executable/ZOPH_RNN_XING (+3, new file)

```
version https://git-lfs.github.com/spec/v1
oid sha256:4e95acec2d0089aee919025f7b7dcf93c399a0ec47cb6b006ade868a41984532
size 126229976
```

scripts/compile.sh (+25 −36)

```diff
@@ -11,47 +11,36 @@
 #Boost version = 1.51.0 or 1.55.0
 #Any version of Eigen
 
-source /usr/usc/cuda/8.0/setup.sh
+source /usr/usc/cuda/7.5/setup.sh
 source /usr/usc/gnu/gcc/4.9.3/setup.sh
 
-PATH_TO_CUDA_INCLUDE=${PATH_TO_CUDA_INCLUDE:-"/usr/usc/cuda/8.0/include/"}
+PATH_TO_CUDA_INCLUDE=${PATH_TO_CUDA_INCLUDE:-"/usr/usc/cuda/7.5/include/"}
 PATH_TO_BOOST_INCLUDE=${PATH_TO_BOOST_INCLUDE:-"/usr/usc/boost/1.55.0/include/"}
-PATH_TO_CUDA_LIB_64=${PATH_TO_CUDA_LIB_64:-"/usr/usc/cuda/8.0/lib64/"}
+PATH_TO_CUDA_LIB_64=${PATH_TO_CUDA_LIB_64:-"/usr/usc/cuda/7.5/lib64/"}
 PATH_TO_BOOST_LIB=${PATH_TO_BOOST_LIB:-"/usr/usc/boost/1.55.0/lib/"}
 PATH_TO_CUDNN_V4_64=${PATH_TO_CUDNN_V4_64:-"/home/nlg-05/zoph/cudnn_v4/lib64/"}
 PATH_TO_EIGEN=${PATH_TO_EIGEN:-"/home/nlg-05/zoph/eigen/"}
 PATH_TO_CUDNN_INCLUDE=${PATH_TO_CUDNN_INCLUDE:-"/home/nlg-05/zoph/cudnn_v4/include/"}
 
-#complie
-#-DCUDNN_STATIC
-#-DTIMER_DEBUG
-nvcc -DCUDNN_STATIC -O3 -arch=sm_35 \
--Xcompiler -fopenmp \
--I $PATH_TO_CUDA_INCLUDE \
--I $PATH_TO_BOOST_INCLUDE \
--I $PATH_TO_EIGEN \
--I $PATH_TO_CUDNN_INCLUDE \
--std=c++11 \
--dc src/main.cu -o main.o
-
-nvcc -c src/format.cc -o format.o
-
-nvcc -arch=sm_35 -rdc=true -O3 main.o format.o -o ZOPH_RNN\
-${PATH_TO_BOOST_LIB}libboost_system.a \
-${PATH_TO_BOOST_LIB}libboost_filesystem.a \
-${PATH_TO_BOOST_LIB}libboost_program_options.a \
-${PATH_TO_BOOST_LIB}libboost_regex.a \
-${PATH_TO_CUDA_LIB_64}libcublas_static.a \
-${PATH_TO_CUDA_LIB_64}libcudadevrt.a \
-${PATH_TO_CUDA_LIB_64}libcudart_static.a \
-${PATH_TO_CUDA_LIB_64}libculibos.a \
-${PATH_TO_CUDA_LIB_64}libcurand_static.a \
-${PATH_TO_CUDA_LIB_64}libcusolver_static.a \
-${PATH_TO_CUDA_LIB_64}libcusparse_static.a \
-${PATH_TO_CUDA_LIB_64}libnppc_static.a \
-${PATH_TO_CUDA_LIB_64}libnppi_static.a \
-${PATH_TO_CUDA_LIB_64}libnpps_static.a \
-${PATH_TO_CUDNN_V4_64}libcudnn_static.a \
-
-
-######
+nvcc -DCUDNN_STATIC -O3 \
+-g -Xcompiler -fopenmp \
+-I $PATH_TO_CUDA_INCLUDE \
+-I $PATH_TO_BOOST_INCLUDE \
+${PATH_TO_BOOST_LIB}libboost_system.a \
+-I $PATH_TO_CUDNN_INCLUDE \
+${PATH_TO_BOOST_LIB}libboost_filesystem.a \
+${PATH_TO_BOOST_LIB}libboost_program_options.a \
+-I $PATH_TO_EIGEN \
+-std=c++11 \
+${PATH_TO_CUDA_LIB_64}libcublas_static.a \
+${PATH_TO_CUDA_LIB_64}libcudadevrt.a \
+${PATH_TO_CUDA_LIB_64}libcudart_static.a \
+${PATH_TO_CUDA_LIB_64}libculibos.a \
+${PATH_TO_CUDA_LIB_64}libcurand_static.a \
+${PATH_TO_CUDA_LIB_64}libcusolver_static.a \
+${PATH_TO_CUDA_LIB_64}libcusparse_static.a \
+${PATH_TO_CUDA_LIB_64}libnppc_static.a \
+${PATH_TO_CUDA_LIB_64}libnppi_static.a \
+${PATH_TO_CUDA_LIB_64}libnpps_static.a \
+${PATH_TO_CUDNN_V4_64}libcudnn_static.a \
+src/main.cu -o ZOPH_RNN
```

scripts/compile.xing.sh (+58, new file)

```
#Written by Barret Zoph, for questions email [email protected]
#Futher edit by Xing Shi ([email protected])

#Compilation script for compiling ZOPH_RNN
#The following 7 environmental variables must be set in order for compilation
#The default arguements are examples of what the paths should look like
#For the dependencies the following are required:
#-----------------------------------------------
#cuda version >= 8.0
#One of the following gcc versions >= 4.9.3
#CuDNN version = 4
#Boost version = 1.51.0 or 1.55.0
#Any version of Eigen

source /usr/usc/cuda/8.0/setup.sh
source /usr/usc/gnu/gcc/4.9.3/setup.sh

PATH_TO_CUDA_INCLUDE=${PATH_TO_CUDA_INCLUDE:-"/usr/usc/cuda/8.0/include/"}
PATH_TO_BOOST_INCLUDE=${PATH_TO_BOOST_INCLUDE:-"/usr/usc/boost/1.55.0/include/"}
PATH_TO_CUDA_LIB_64=${PATH_TO_CUDA_LIB_64:-"/usr/usc/cuda/8.0/lib64/"}
PATH_TO_BOOST_LIB=${PATH_TO_BOOST_LIB:-"/usr/usc/boost/1.55.0/lib/"}
PATH_TO_CUDNN_V4_64=${PATH_TO_CUDNN_V4_64:-"/home/nlg-05/zoph/cudnn_v4/lib64/"}
PATH_TO_EIGEN=${PATH_TO_EIGEN:-"/home/nlg-05/zoph/eigen/"}
PATH_TO_CUDNN_INCLUDE=${PATH_TO_CUDNN_INCLUDE:-"/home/nlg-05/zoph/cudnn_v4/include/"}

#complie

#-DTIMER_DEBUG
nvcc -DCUDNN_STATIC -O3 -arch=sm_35 \
-Xcompiler -fopenmp \
-I $PATH_TO_CUDA_INCLUDE \
-I $PATH_TO_BOOST_INCLUDE \
-I $PATH_TO_EIGEN \
-I $PATH_TO_CUDNN_INCLUDE \
-std=c++11 \
-dc src/main.cu -o main.o

nvcc -c src/format.cc -o format.o

nvcc -arch=sm_35 -rdc=true -O3 main.o format.o -o ZOPH_RNN_XING\
${PATH_TO_BOOST_LIB}libboost_system.a \
${PATH_TO_BOOST_LIB}libboost_filesystem.a \
${PATH_TO_BOOST_LIB}libboost_program_options.a \
${PATH_TO_BOOST_LIB}libboost_regex.a \
${PATH_TO_CUDA_LIB_64}libcublas_static.a \
${PATH_TO_CUDA_LIB_64}libcudadevrt.a \
${PATH_TO_CUDA_LIB_64}libcudart_static.a \
${PATH_TO_CUDA_LIB_64}libculibos.a \
${PATH_TO_CUDA_LIB_64}libcurand_static.a \
${PATH_TO_CUDA_LIB_64}libcusolver_static.a \
${PATH_TO_CUDA_LIB_64}libcusparse_static.a \
${PATH_TO_CUDA_LIB_64}libnppc_static.a \
${PATH_TO_CUDA_LIB_64}libnppi_static.a \
${PATH_TO_CUDA_LIB_64}libnpps_static.a \
${PATH_TO_CUDNN_V4_64}libcudnn_static.a \


######
```

scripts/fsa/demo.sh (+16 −11)

```diff
@@ -1,5 +1,11 @@
 # we have a simple task, translate number into letters
 
+# first run generate_fsa.py to generate the fsa file and training and validation data set
+
+python generate_fsa.py
+
+# it will generate the following:
+
 # source.train.txt : 6 number sentences.
 # target.train.txt : 6 letter sentences.
 
@@ -9,36 +15,35 @@
 # and a simple fsa file which forces the output to have following words:
 # ["lstm","is","great","slow","luckily","we","make","it","fast","enough","and","with","fsa"]
 
-source /usr/usc/cuda/7.0/setup.sh
-source /usr/usc/boost/1.55.0/setup.sh
-source /usr/usc/gnu/gcc/4.8.1/setup.sh
+EXEC=../../../executable/ZOPH_RNN_XING
 
 # [Train] train the translation model
-../../ZOPH_RNN -t source.train.txt target.train.txt model.nn -L 15 -H 40 -N 1 -M 0 1 -B best.nn -a source.valid.txt target.valid.txt -d 0.5 -P -0.05 0.05 -w 5 -m 20 -n 20 -l 1 -A 0.8
+$EXEC -t source.train.txt target.train.txt model.nn -L 15 -H 40 -N 1 -M 0 1 -B best.nn -a source.valid.txt target.valid.txt -d 0.5 -P -0.05 0.05 -w 5 -m 20 -n 20 -l 1 -A 0.8
 
 # [Decode] decode the top 10 for the source.valid.txt
-../../ZOPH_RNN -k 10 best.nn kbest.txt --print-score 1 -b 20 --decode-main-data-files source.valid.txt
+$EXEC -k 10 best.nn kbest.txt --print-score 1 -b 20 --decode-main-data-files source.valid.txt
 
 # [Decode + Fsa] decode the top 10 with fsa integration
-../../ZOPH_RNN -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --decode-main-data-files source.valid.txt
+$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --decode-main-data-files source.valid.txt
 
 # [Decode + Fsa + Beam Info] To see the beam cells during decoding: --print-beam 1
-../../ZOPH_RNN -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt
+$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt
 
 # [Decode + Fsa + Beam Info + encourage-list + repeat-penalty + adjacent-repeat-penalty + alliteration + wordlen ]
-../../ZOPH_RNN -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --alliteration-weight 1.0 --wordlen-weight 1.0 --encourage-list enc1.txt enc2.txt --encourage-weight 1.0,-1.0
+$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --alliteration-weight 1.0 --wordlen-weight 1.0 --encourage-list enc1.txt enc2.txt --encourage-weight 1.0,-1.0
 
 # [Interactive mode] : --interactive 1
-../../ZOPH_RNN -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --interactive 1
+$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --repeat-penalty -1.0 --adjacent-repeat-penalty -1.0 --interactive 1
 # it will print the following commend:
 # Please input k:<k> source_file:<source_file> fsa_file:<fsa_file> repetition:<repetition_weight> alliteration:<alliteration_weight> wordlen:<wordlen_weight> encourage_list_files:<file1>,<file2> encourage_weights:<weight1>,<weight2>
 # Note:
 # <repetition> is the same as --repeat-penalty, and it will add these two weights;
 # the command line should contains --fsa <fsa_file> and --decode-main-data-files <source_file>, both fsa_file and source_file should exist and are valid fsa_file and source file, although you don't really use them in the interactive mode.
 
+# [Interactive-line mode] : --interactive 1 --interactive-line 1
+$EXEC -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --interactive-line 1 --interactive-line 1
+
 
-# [Interactive-line mode] : --interactive 1
-../../ZOPH_RNN -k 10 best.nn kbest_fsa.txt --print-score 1 -b 5 --fsa fsa.txt --print-beam 1 --decode-main-data-files source.valid.txt --interactive-line 1
 
 
```
scripts/fsa/enc1.txt (+2)

```diff
@@ -1 +1,3 @@
 l
+s
+t
```

scripts/fsa/enc2.txt (+2 −1)

```diff
@@ -1 +1,2 @@
-i
+i 0.3
+m -0.5
```
