Using TriScale for ETX #10
Hi, I was playing around with ETX as a metric. I define ETX as attempted transmissions per successful transmission, i.e., the number of TX attempts divided by the number of successful TXs. Calculating the total ETX for an entire run is therefore trivial. However, when calculating it over time so that I could use TriScale's metric analysis, I ran into some challenges.

The core issue is that transmissions may not succeed, leading to a 0 in the divisor. For example, an intuitive approach would be to compute ETX per transaction (the MAC-layer transmissions and retransmissions of the same packet). Yet the MAC will typically give up, e.g., after 8 attempts; what ETX should one assign to such a transaction? Similar situations can occur when calculating ETX per minute, for instance.

One option would be to exclude such transactions and instead capture them in a different metric (such as transaction loss); this would be somewhat analogous to the split between end-to-end latency and delivery ratio. This does seem precise, but it disperses the information across two values, making comparisons harder.

I would love to hear thoughts on this, as I suspect there is some angle on the whole problem that I am missing. Note that the issue is probably transferable to other metrics as well.
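To make the problem concrete, here is a minimal sketch of the "exclude and count separately" option. The record layout `(timestamp_s, attempts, successes)` per transaction, the function name, and the window size are all illustrative assumptions, not TriScale API:

```python
# Sketch: per-window ETX from a hypothetical per-transaction log.
# Each record is (timestamp_s, attempts, successes); layout is assumed.

def etx_per_window(records, window_s=60.0):
    """Return a list of (window_start, etx) pairs.

    Windows with zero successful transmissions yield None, so the
    caller can exclude them from the ETX series and count them in a
    separate loss metric instead.
    """
    if not records:
        return []
    windows = {}
    for t, attempts, successes in records:
        key = int(t // window_s) * window_s
        a, s = windows.get(key, (0, 0))
        windows[key] = (a + attempts, s + successes)
    result = []
    for start in sorted(windows):
        attempts, successes = windows[start]
        # Division by zero is exactly the issue described above:
        # a window with no success has no finite ETX.
        etx = attempts / successes if successes > 0 else None
        result.append((start, etx))
    return result
```

A window like `(70, 8, 0)` (8 attempts, all failed) then produces `None` rather than an infinite ETX, which is the dispersion-across-two-metrics situation described above.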
TL;DR: If you struggle when using a given metric, then you should consider changing it.

Indeed, you touch upon something interesting here: how to deal with unbounded metric values? As you noted, it is the same problem as the delay of packets that don't get delivered. How is this dealt with in practice? Simply by using two different metrics: packet delay and packet reception rate. This is simple and clean from an analysis point of view, but the drawback is that you now have two different performance dimensions (delay and reliability), which leads to a set of Pareto-optimal solutions, that is, protocols that perform better than the others in one of the two dimensions. It makes head-to-head comparison more precise, but less conclusive.

Another approach is to apply a data transformation. Keeping the example of delay, one could analyze 1/d instead of d. Since delay is strictly positive, this gets rid of the infinity problem; however, you could still get arbitrarily high values. This is not ideal, as it will bias the data scaling that is performed before the convergence test. Also, one should think twice before introducing "yet another metric" that readers might have difficulty interpreting. That said, it can still be acceptable if you use the transformation for the convergence test only.

More generally, I'd (strongly) recommend asking oneself: what is the performance dimension that I really want to capture? In your case, why are you looking at ETX? What statement do you want to make with it?

Hope that helps!
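As an illustration of the transformation idea, the reciprocal of ETX (successes per attempt) stays bounded in [0, 1] and finite even when nothing is delivered. This is only a sketch; the function name is made up, and whether readers find the resulting quantity interpretable is exactly the caveat raised above:

```python
def delivery_efficiency(attempts, successes):
    """Reciprocal of ETX: successful TXs per attempt, bounded in [0, 1].

    Unlike ETX, this is finite when nothing is delivered (it is 0),
    so no window has to be excluded for that reason. A window with
    no traffic at all is still undefined, hence None.
    """
    if attempts == 0:
        return None  # no transmissions in this window: undefined, not zero
    return successes / attempts
```

For example, a transaction where the MAC gives up after 8 failed attempts maps to 0.0 instead of an infinite ETX. One could use such a transformation for the convergence test only, while still reporting plain ETX to readers.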