{
"source_id": "glossary",
"url": "https://developers.google.com/machine-learning/glossary",
"sections": [
{
"name": "A/B testing",
"split": "valid",
"text": [
[
"A/B testing",
"A/B 测试"
],
[
"A statistical way of comparing two (or more) techniques, typically an incumbent against a new rival.",
"一种统计方法,用于将两种或多种技术进行比较,通常是将当前采用的技术与新技术进行比较。"
],
[
"A/B testing aims to determine not only which technique performs better but also to understand whether the difference is statistically significant.",
"A/B 测试不仅旨在确定哪种技术的效果更好,而且还有助于了解相应差异是否具有显著的统计意义。"
],
[
"A/B testing usually considers only two techniques using one measurement, but it can be applied to any finite number of techniques and measures.",
"A/B 测试通常是采用一种衡量方式对两种技术进行比较,但也适用于任意有限数量的技术和衡量方式。"
]
]
},
{
"name": "batch size",
"split": "valid",
"text": [
[
"batch size",
"批次大小"
],
[
"The number of examples in a batch.",
"一个批次中的样本数。"
],
[
"For example, the batch size of SGD is 1, while the batch size of a mini-batch is usually between 10 and 1000.",
"例如,SGD 的批次大小为 1,而小批次的大小通常介于 10 到 1000 之间。"
],
[
"Batch size is usually fixed during training and inference; however, TensorFlow does permit dynamic batch sizes.",
"批次大小在训练和推断期间通常是固定的;不过,TensorFlow 允许使用动态批次大小。"
]
]
},
{
"name": "binning",
"split": "valid",
"text": [
[
"binning",
"分箱"
],
[
"See bucketing.",
"请参阅分桶。"
]
]
},
{
"name": "candidate sampling",
"split": "valid",
"text": [
[
"candidate sampling",
"候选采样"
],
[
"A training-time optimization in which a probability is calculated for all the positive labels, using, for example, softmax, but only for a random sample of negative labels.",
"一种训练时进行的优化,会使用某种函数(例如 softmax)针对所有正类别标签计算概率,但对于负类别标签,则仅针对其随机样本计算概率。"
],
[
"For example, if we have an example labeled beagle and dog, candidate sampling computes the predicted probabilities and corresponding loss terms for the beagle and dog class outputs in addition to a random subset of the remaining classes (cat, lollipop, fence).",
"例如,如果某个样本的标签为“小猎犬”和“狗”,则候选采样将针对“小猎犬”和“狗”类别输出以及其他类别(猫、棒棒糖、栅栏)的随机子集计算预测概率和相应的损失项。"
],
[
"The idea is that the negative classes can learn from less frequent negative reinforcement as long as positive classes always get proper positive reinforcement, and this is indeed observed empirically.",
"这种采样基于的想法是,只要正类别始终得到适当的正增强,负类别就可以从频率较低的负增强中进行学习,这确实是在实际中观察到的情况。"
],
[
"The motivation for candidate sampling is a computational efficiency win from not computing predictions for all negatives.",
"候选采样的目的是,通过不针对所有负类别计算预测结果来提高计算效率。"
]
]
},
{
"name": "convergence",
"split": "valid",
"text": [
[
"convergence",
"收敛"
],
[
"Informally, often refers to a state reached during training in which training loss and validation loss change very little or not at all with each iteration after a certain number of iterations.",
"通俗来说,收敛通常是指在训练期间达到的一种状态,即经过一定次数的迭代之后,训练损失和验证损失在每次迭代中的变化都非常小或根本没有变化。"
],
[
"In other words, a model reaches convergence when additional training on the current data will not improve the model.",
"也就是说,如果采用当前数据进行额外的训练将无法改进模型,模型即达到收敛状态。"
],
[
"In deep learning, loss values sometimes stay constant or nearly so for many iterations before finally descending, temporarily producing a false sense of convergence.",
"在深度学习中,损失值有时会在最终下降之前的多次迭代中保持不变或几乎保持不变,暂时形成收敛的假象。"
],
[
"See also early stopping.",
"另请参阅早停法。"
],
[
"See also Boyd and Vandenberghe, Convex Optimization.",
"另请参阅 Boyd 和 Vandenberghe 合著的 Convex Optimization(《凸优化》)。"
]
]
},
{
"name": "critic",
"split": "valid",
"text": []
},
{
"name": "data analysis",
"split": "valid",
"text": []
},
{
"name": "DataFrame",
"split": "valid",
"text": []
},
{
"name": "decision threshold",
"split": "valid",
"text": []
},
{
"name": "dense feature",
"split": "valid",
"text": []
},
{
"name": "dimensions",
"split": "valid",
"text": []
},
{
"name": "disparate impact",
"split": "valid",
"text": []
},
{
"name": "disparate treatment",
"split": "valid",
"text": []
},
{
"name": "downsampling",
"split": "valid",
"text": []
},
{
"name": "epsilon greedy policy",
"split": "valid",
"text": []
},
{
"name": "equalized odds",
"split": "valid",
"text": []
},
{
"name": "fairness metric",
"split": "valid",
"text": []
},
{
"name": "false negative (FN)",
"split": "valid",
"text": []
},
{
"name": "feature cross",
"split": "valid",
"text": []
},
{
"name": "graph",
"split": "valid",
"text": []
},
{
"name": "i.i.d.",
"split": "valid",
"text": []
},
{
"name": "inter-rater agreement",
"split": "valid",
"text": []
},
{
"name": "IoU",
"split": "valid",
"text": []
},
{
"name": "Keras",
"split": "valid",
"text": []
},
{
"name": "keypoints",
"split": "valid",
"text": []
},
{
"name": "Kernel Support Vector Machines (KSVMs)",
"split": "valid",
"text": []
},
{
"name": "labeled example",
"split": "valid",
"text": []
},
{
"name": "layer",
"split": "valid",
"text": []
},
{
"name": "Layers API (tf.layers)",
"split": "valid",
"text": []
},
{
"name": "loss",
"split": "valid",
"text": []
},
{
"name": "minority class",
"split": "valid",
"text": []
},
{
"name": "noise",
"split": "valid",
"text": []
},
{
"name": "P",
"split": "valid",
"text": []
},
{
"name": "perceptron",
"split": "valid",
"text": []
},
{
"name": "performance",
"split": "valid",
"text": []
},
{
"name": "policy",
"split": "valid",
"text": []
},
{
"name": "pooling",
"split": "valid",
"text": []
},
{
"name": "quantile",
"split": "valid",
"text": []
},
{
"name": "rank (Tensor)",
"split": "valid",
"text": []
},
{
"name": "Rectified Linear Unit (ReLU)",
"split": "valid",
"text": []
},
{
"name": "replay buffer",
"split": "valid",
"text": []
},
{
"name": "scalar",
"split": "valid",
"text": []
},
{
"name": "state",
"split": "valid",
"text": []
},
{
"name": "subsampling",
"split": "valid",
"text": []
},
{
"name": "target",
"split": "valid",
"text": []
},
{
"name": "TPU chip",
"split": "valid",
"text": []
},
{
"name": "TPU type",
"split": "valid",
"text": []
},
{
"name": "transfer learning",
"split": "valid",
"text": []
},
{
"name": "vanishing gradient problem",
"split": "valid",
"text": []
}
]
}