#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Oct 18 19:24:18 2024
Last modified on Thu Dec 05 08:00:13 2024
@author: cghiaus
ELECTRE Tri-B
In ELECTRE Tri-B each category is characterized
by two reference profiles corresponding to the limits of this category.
In ELECTRE Tri-C each category is characterized by one
reference profile only, being representative of the category
[Corrente et al., 2016].
References
Almeida-Dias, J., Figueira, J. R., & Roy, B. (2010). Electre Tri-C: A multiple
criteria sorting method based on characteristic reference actions.
European Journal of Operational Research, 204(3), 565-580.
https://doi.org/10.1016/j.ejor.2009.10.018
https://hal.science/hal-00907583v1/document
Mousseau, V., Slowinski, R., & Zielniewicz, P. (1999). ELECTRE TRI 2.0
Methodological guide and user’s manual. Universite Paris Dauphine,
Document du LAMSADE, 111, 263-275.
https://www.lamsade.dauphine.fr/mcda/biblio/PDF/mous3docl99.pdf
Mousseau, V., Slowinski, R., & Zielniewicz, P. (2000). A user-oriented
implementation of the ELECTRE-TRI method integrating preference elicitation
support. Computers & operations research, 27(7-8), 757-777.
https://doi.org/10.1016/S0305-0548(99)00117-3
https://www.lamsade.dauphine.fr/mcda/biblio/PDF/mous3cor00.pdf
J. Almeida-Dias , J. R. Figueira , B. Roy (2010) A multiple criteria sorting
method defining each category by several characteristic reference actions:
The Electre Tri-nC method, Cahier du LAMSADE 294, Université Paris Dauphine,
CNRS
https://hal.science/hal-01511223/document
Corrente, S., Greco, S., & Słowiński, R. (2016). Multiple criteria hierarchy
process for ELECTRE Tri methods. European Journal of Operational Research,
252(1), 191-203.
https://doi.org/10.1016/j.ejor.2015.12.053
https://pure.port.ac.uk/ws/portalfiles/portal/5001301/GRECO_Multiple_Criteria_Hierarchy_Process_for_ELECTRE_Tri_methods_Postprint.pdf
Figueira, J. R., Mousseau, V., & Roy, B. (2016). ELECTRE methods. Multiple
criteria decision analysis: State of the art surveys, 155-185.
https://www.lamsade.dauphine.fr/mcda/biblio/PDF/JFVMBR2005.pdf
Baseer, M., Ghiaus, C., Viala, R., Gauthier, N., & Daniel, S. (2023).
pELECTRE-Tri: Probabilistic ELECTRE-Tri Method—Application for the
Energy Renovation of Buildings. Energies, 16(14), 5296.
https://doi.org/10.3390/en16145296
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option('future.no_silent_downcasting', True)
def read_electre_tri_base(filename):
"""Reads the data of the ELECTRE Tri problem.
Args:
filename (str): Name of .csv file containing the data of the problem.
Returns:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
B (DataFrame): Base profiles in ascending order for criteria (columns).
T (DataFrame): Indifference (q), preference (p) and veto (v) thresholds
for each criterion (column).
w (Series): Weight for each criterion.
Example
-------
>>> data_file = './data/simple_example.csv'
>>> A, B, T, w = read_electre_tri_base(data_file)
>>> ...
where `simple_example.csv` is:
.. code-block:: none
type, profile, c1, c2
A, a1, 8.5, 18
A, a2, 14, 16
A, a3, 5, 27
B, b1, 10, 15
B, b2, 15, 20
T, q, 1, 2
T, p, 2, 4
T, v, 4, 8
w, , 0.7, 0.3
"""
# Read the CSV file
df = pd.read_csv(filename, header=0)
# Extract A
A = df[df.iloc[:, 0] == 'A'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'A'].iloc[:, 1])
A.index.name = None
# Extract B
B = df[df.iloc[:, 0] == 'B'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'B'].iloc[:, 1])
B.index.name = None
# Extract T
T = df[df.iloc[:, 0] == 'T'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'T'].iloc[:, 1])
T.index.name = None
# Extract w
w = pd.Series(df[df.iloc[:, 0] == 'w'].iloc[0, 2:].dropna())
w.name = None # Remove the name from the Series
return A, B, T, w
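The row-filtering pattern used above can be sketched on an in-memory CSV (a minimal illustration, not part of the module; `io.StringIO` stands in for a file on disk, and the CSV text is written without spaces after the commas so the labels parse cleanly):

```python
import io
import pandas as pd

# In-memory CSV mimicking the structure of `simple_example.csv`.
csv_text = """type,profile,c1,c2
A,a1,8.5,18
A,a2,14,16
B,b1,10,15
w,,0.7,0.3
"""
df = pd.read_csv(io.StringIO(csv_text), header=0)

# Rows whose first column is 'A' become the performance matrix;
# the second column supplies the index, the remaining columns are criteria.
A = df[df.iloc[:, 0] == 'A'].iloc[:, 2:].set_index(
    df[df.iloc[:, 0] == 'A'].iloc[:, 1])
A.index.name = None
print(A.shape)  # (2, 2)
```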
def read_pelectre_tri_base(filename):
"""Reads the data of the pELECTRE Tri problem.
Args:
filename (str): Name of .csv file containing the data of the problem.
Returns:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
S (DataFrame): Standard deviation of performance matrix
of alternatives (rows) for criteria (columns).
B (DataFrame): Base profiles in ascending order for criteria (columns).
T (DataFrame): Indifference (q), preference (p) and veto (v) thresholds
for each criterion (column).
w (Series): Weight for each criterion.
Example
-------
>>> data_file = './data/simple_example_std.csv'
>>> A, S, B, T, w = read_pelectre_tri_base(data_file)
>>> ...
where `simple_example_std.csv` is:
.. code-block:: none
type, profile, c1, c2
A, a1, 8.5, 18
A, a2, 14, 16
A, a3, 5, 27
S, a1, 0.85, 1.8
S, a2, 1.4, 1.6
S, a3, 0.5, 2.7
B, b1, 10, 15
B, b2, 15, 20
T, q, 1, 2
T, p, 2, 4
T, v, 4, 8
w, , 0.7, 0.3
"""
# Read the CSV file
df = pd.read_csv(filename, header=0)
# Extract A
A = df[df.iloc[:, 0] == 'A'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'A'].iloc[:, 1])
A.index.name = None
# Extract S
S = df[df.iloc[:, 0] == 'S'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'S'].iloc[:, 1])
S.index.name = None
# Extract B
B = df[df.iloc[:, 0] == 'B'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'B'].iloc[:, 1])
B.index.name = None
# Extract T
T = df[df.iloc[:, 0] == 'T'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'T'].iloc[:, 1])
T.index.name = None
# Extract w
w = pd.Series(df[df.iloc[:, 0] == 'w'].iloc[0, 2:].dropna())
w.name = None # Remove the name from the Series
return A, S, B, T, w
def read_electre_tri_level(filename):
""" Reads the data for worst and best possible base profiles.
Args:
filename (str): Name of .csv file containing the data of the problem.
Returns:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
L (DataFrame): Worst and best base profiles in ascending order for
criteria (columns).
w (Series): Weight for each criterion.
"""
# Read the CSV file
df = pd.read_csv(filename, header=0)
# Extract A
A = df[df.iloc[:, 0] == 'A'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'A'].iloc[:, 1])
A.index.name = None
# Extract L
L = df[df.iloc[:, 0] == 'L'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'L'].iloc[:, 1])
L.index.name = None
# Extract w
w = pd.Series(df[df.iloc[:, 0] == 'w'].iloc[0, 2:].dropna())
w.name = None # Remove the name from the Series
return A, L, w
def read_pelectre_tri_level(filename):
""" Reads the data for worst and best possible base profiles.
Args:
filename (str): Name of .csv file containing the data of the problem.
Returns:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
S (DataFrame): Standard deviation of performance matrix
of alternatives (rows) for criteria (columns).
L (DataFrame): Worst and best base profiles in ascending order for
criteria (columns).
w (Series): Weight for each criterion.
"""
# Read the CSV file
df = pd.read_csv(filename, header=0)
# Extract A
A = df[df.iloc[:, 0] == 'A'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'A'].iloc[:, 1])
A.index.name = None
# Extract S
S = df[df.iloc[:, 0] == 'S'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'S'].iloc[:, 1])
S.index.name = None
# Extract L
L = df[df.iloc[:, 0] == 'L'].iloc[:, 2:].set_index(
df[df.iloc[:, 0] == 'L'].iloc[:, 1])
L.index.name = None
# Extract w
w = pd.Series(df[df.iloc[:, 0] == 'w'].iloc[0, 2:].dropna())
w.name = None # Remove the name from the Series
return A, S, L, w
def base_profile(L, n_base_profile=4):
"""Base profiles for each criterion.
The range between the worst and the best possible levels `L` is divided
by `n_base_profile` equidistant base profiles, resulting in
`n_base_profile + 1` categories.
Args:
L (DataFrame): Worst and best possible base profiles in
ascending order for criteria (columns).
n_base_profile (int, optional): Number of base profiles. Defaults to 4.
Returns:
B (DataFrame): Base profiles in ascending order for criteria (columns).
"""
# Calculate the range for each column
ranges = L.loc['best'] - L.loc['worst']
# Create the percentages for the profiles
percentages = np.linspace(0, 1, n_base_profile + 2)[1:-1]
# Create the base profiles
B = pd.DataFrame({
col: [L.loc['worst', col] + p * ranges[col] for p in percentages]
for col in L.columns
}, index=[f'b{i+1}' for i in range(len(percentages))])
return B
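For instance, with an assumed worst level of 0 and best level of 100 and the default `n_base_profile=4`, the interior points of `linspace` give four equidistant profiles (a minimal sketch of the computation above):

```python
import numpy as np

# Interior points of linspace(0, 1, n + 2) are the n profile fractions.
n_base_profile = 4
percentages = np.linspace(0, 1, n_base_profile + 2)[1:-1]

# Map the fractions onto the worst-to-best range (0 to 100 here).
profiles = [round(0 + p * (100 - 0), 6) for p in percentages]
print(profiles)  # [20.0, 40.0, 60.0, 80.0]
```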
def threshold(B, threshold_percent=[0.10, 0.25, 0.50]):
"""
Indifference (q), preference (p), and veto (v) thresholds as a percentage
of the range between two consecutive equidistant base profiles.
Args:
B (DataFrame): Base profiles in ascending order for criteria (columns).
threshold_percent (list, optional): Values of indifference (q),
preference (p) and veto (v) thresholds as a percentage of
the range between two consecutive equidistant base profiles.
Defaults to [0.10, 0.25, 0.50].
Returns:
T (DataFrame): Indifference (q), preference (p) and veto (v)
thresholds for each criterion (column).
"""
T = pd.DataFrame(index=['q', 'p', 'v'], columns=B.columns)
for col in B.columns:
# Calculate the differences between consecutive rows in the column
differences = B[col].diff().dropna()
# Calculate thresholds for q, p, and v based on the percentages
T.at['q', col] = threshold_percent[0] * differences.mean()
T.at['p', col] = threshold_percent[1] * differences.mean()
T.at['v', col] = threshold_percent[2] * differences.mean()
T = T.astype(float)
return T
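Since the base profiles are equidistant, `B[col].diff()` yields equal gaps, and the default percentages simply scale the common gap (a sketch with an assumed gap of 20):

```python
# Mean gap between consecutive equidistant profiles (here all gaps are 20).
profiles = [20.0, 40.0, 60.0, 80.0]
gaps = [hi - lo for lo, hi in zip(profiles, profiles[1:])]
mean_gap = sum(gaps) / len(gaps)

# Default threshold percentages: q = 10 %, p = 25 %, v = 50 % of the gap.
q, p, v = (pct * mean_gap for pct in (0.10, 0.25, 0.50))
print(q, p, v)  # 2.0 5.0 10.0
```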
def partial_concordance(A, B, T):
"""Partial concordance between profiles `a` and `b` for each criterion `c`.
Truth value (between 0 and 1) of the concordance (i.e. agreement)
with the statement:
*a outranks b for criterion c*
where "outranks" means "is at least as good as".
In ELECTRE Tri, two partial concordances are calculated:
- between alternatives a and base profiles b;
- between base profiles b and alternatives a.
Args:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
B (DataFrame): Base profiles in ascending order for criteria (columns).
T (DataFrame): Indifference (q), preference (p) and veto (v) thresholds
for each criterion (columns).
Returns:
con_ab (DataFrame): Partial (or local) concordance between
alternatives `a` and base profiles `b` for each criterion `c`.
con_ab has `criteria` and `base` as indexes and
`alternatives` as columns.
con_ba (DataFrame): Partial (or local) concordance between
base profiles `b` and alternatives `a` for each criterion `c`.
con_ba has `criteria` and `base` as indexes and
`alternatives` as columns.
Let's note:
- a : value of alternative a for a given criterion c
- b : value of base b for the same criterion c
- q : indifference threshold of b for criterion c
- p : preference threshold of b for criterion c
Partial concordance between a and b, con_ab, is:
- = 1, if a >= b - q
- = 0, if a < b - p
- = (a - b + p) / (p - q), otherwise
Partial concordance between b and a, con_ba, is:
- = 1, if b >= a - q
- = 0, if b < a - p
- = (b - a + p) / (p - q), otherwise
Example
-------
For `data_file.csv`:
+---+----+-----+----+
| | | c1 | c2 |
+===+====+=====+====+
| A | a1 | 8.5 | 18 |
+---+----+-----+----+
| A | a2 | 14 | 16 |
+---+----+-----+----+
| A | a3 | 5 | 27 |
+---+----+-----+----+
| B | b1 | 10 | 15 |
+---+----+-----+----+
| B | b2 | 15 | 20 |
+---+----+-----+----+
| T | q | 1 | 2 |
+---+----+-----+----+
| T | p | 2 | 4 |
+---+----+-----+----+
| T | v | 4 | 8 |
+---+----+-----+----+
| w | | 0.7 | 0.3|
+---+----+-----+----+
.. code-block:: none
type, profile, c1, c2
A, a1, 8.5, 18
A, a2, 14, 16
A, a3, 5, 27
B, b1, 10, 15
B, b2, 15, 20
T, q, 1, 2
T, p, 2, 4
T, v, 4, 8
w, , 0.7, 0.3
Partial concordance between alternatives and base profiles, `con_ab`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 0.5 | 1 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 1 | 0 |
+----------+------+-----+-----+-----+
| c2 | b1 | 1 | 1 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 1 | 0 | 1 |
+----------+------+-----+-----+-----+
Consider column a1:
con_ab[(c1, b1), a1] = 0.5 means that it is partly true that a1 outranks b1
on criterion c1, i.e., on c1, a1 is between the indifference (b1 - q) and
preference (b1 - p) boundaries of b1.
con_ab[(c1, b2), a1] = 0 means that it is not true that a1 outranks b2
on criterion c1, i.e., on c1, a1 is below the preference
boundary of b2, b2 - p.
con_ab[(c2, b2), a1] = 1 means that it is true that a1 outranks b2
on criterion c2, i.e., on c2, a1 is above the indifference
boundary of b2, b2 - q.
Partial concordance between base profiles and alternatives, `con_ba`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 1 | 0 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 1 | 1 | 1 |
+----------+------+-----+-----+-----+
| c2 | b1 | 0.5 | 1 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 1 | 1 | 0 |
+----------+------+-----+-----+-----+
Consider row (c2, b1):
con_ba[(c2, b1), a1] = 0.5 means that it is partly true that b1 outranks a1
on criterion c2, i.e., on c2, b1 is between the indifference (a1 - q) and
preference (a1 - p) boundaries of a1.
con_ba[(c2, b1), a2] = 1 means that it is true that b1 outranks a2
on criterion c2, i.e., on c2, b1 is above the indifference
boundary of a2, a2 - q.
con_ba[(c2, b1), a3] = 0 means that it is not true that b1 outranks a3
on criterion c2, i.e., on c2, b1 is below the preference
boundary of a3, a3 - p.
"""
# Initialize the result DataFrame
index = pd.MultiIndex.from_product([A.columns, B.index],
names=['criteria', 'base'])
con_ab = pd.DataFrame(index=index, columns=A.index)
con_ba = pd.DataFrame(index=index, columns=A.index)
for criterion in A.columns:
for base in B.index:
for alternative in A.index:
a = A.loc[alternative, criterion]
b = B.loc[base, criterion]
q = T.loc['q', criterion]
p = T.loc['p', criterion]
# partial concordance (a_i, b_k) for c_j
if a >= b - q:
con = 1
elif a < b - p:
con = 0
else:
con = (a - b + p) / (p - q)
con_ab.loc[(criterion, base), alternative] = con
# partial concordance (b_k, a_i) for c_j
if b >= a - q:
con = 1
elif b < a - p:
con = 0
else:
con = (b - a + p) / (p - q)
con_ba.loc[(criterion, base), alternative] = con
# Replace NaN with 0 and round to 3 decimal places
con_ab = con_ab.fillna(0).round(3)
con_ba = con_ba.fillna(0).round(3)
return con_ab, con_ba
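The piecewise rule above can be checked in isolation against the docstring example: column a1/a2/a3 of `con_ab` for (c1, b1), with b1 = 10, q = 1, p = 2 (a standalone sketch, not the full function):

```python
def partial_con(a, b, q, p):
    # Truth value of "a outranks b" on a single criterion.
    if a >= b - q:
        return 1.0
    if a < b - p:
        return 0.0
    return (a - b + p) / (p - q)

# con_ab for (c1, b1) in the docstring example: a1, a2, a3 vs b1 = 10.
print(partial_con(8.5, 10, 1, 2))  # 0.5 (between b1 - q and b1 - p)
print(partial_con(14, 10, 1, 2))   # 1.0 (above b1 - q)
print(partial_con(5, 10, 1, 2))    # 0.0 (below b1 - p)
```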
def discordance(A, B, T):
"""Partial discordance between profiles `a` and `b` for each criterion `c`.
Truth value (between 0 and 1) of the discordance (i.e. disagreement)
with the statement:
*a outranks b for criterion c*
where "outranks" means "is at least as good as".
Args:
A (DataFrame): Performance matrix of alternatives (rows) for
criteria (columns).
B (DataFrame): Base profiles in ascending order for criteria (columns).
T (DataFrame): Indifference (q), preference (p) and veto (v) thresholds
for each criterion (columns).
Returns:
dis_ab (DataFrame): Partial (or local) discordance between
alternatives `a` and base profiles `b` for each criterion `c`. dis_ab
has `criteria` and `base` as indexes and `alternatives` as columns.
dis_ba (DataFrame): Partial (or local) discordance between
base profiles `b` and alternatives `a` for each criterion `c`. dis_ba
has `criteria` and `base` as indexes and `alternatives` as columns.
Let's note:
- a : value of alternative a for a given criterion c
- b : value of base b for the same criterion c
- q : indifference threshold of b for criterion c
- p : preference threshold of b for criterion c
Partial discordance between a and b, dis_ab, is:
- = 0, if a >= b - p
- = 1, if a < b - v
- = (b - a - p) / (v - p), otherwise
Partial discordance between b and a, dis_ba, is:
- = 0, if b >= a - p
- = 1, if b < a - v
- = (a - b - p) / (v - p), otherwise
Example
-------
For `data_file.csv`:
+---+----+-----+----+
| | | c1 | c2 |
+===+====+=====+====+
| A | a1 | 8.5 | 18 |
+---+----+-----+----+
| A | a2 | 14 | 16 |
+---+----+-----+----+
| A | a3 | 5   | 27 |
+---+----+-----+----+
| B | b1 | 10 | 15 |
+---+----+-----+----+
| B | b2 | 15 | 20 |
+---+----+-----+----+
| T | q | 1 | 2 |
+---+----+-----+----+
| T | p | 2 | 4 |
+---+----+-----+----+
| T | v | 4 | 8 |
+---+----+-----+----+
| w | | 0.7 | 0.3|
+---+----+-----+----+
.. code-block:: none
type, profile, c1, c2
A, a1, 8.5, 18
A, a2, 14, 16
A, a3, 5, 27
B, b1, 10, 15
B, b2, 15, 20
T, q, 1, 2
T, p, 2, 4
T, v, 4, 8
w, , 0.7, 0.3
Partial discordance between alternatives and base profiles, `dis_ab`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 0 | 0 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 1 | 0 | 1 |
+----------+------+-----+-----+-----+
| c2 | b1 | 0 | 0 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 0 | 0 |
+----------+------+-----+-----+-----+
Consider column a1:
dis_ab[(c1, b1), a1] = 0 means that it is not true that a1 is worse than b1
on criterion c1, i.e., on c1, a1 is above the preference
boundary of b1, b1 - p.
dis_ab[(c1, b2), a1] = 1 means that it is true that a1 is worse than b2
on criterion c1, i.e., on c1, a1 is below the veto
boundary of b2, b2 - v.
Partial discordance between base profiles and alternatives, `dis_ba`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 0 | 1 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 0 | 0 |
+----------+------+-----+-----+-----+
| c2 | b1 | 0 | 0 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 0 | 0.75|
+----------+------+-----+-----+-----+
Consider row (c2, b2):
dis_ba[(c2, b2), a1] = 0 means that it is not true that b2 is worse than a1
on criterion c2, i.e., on c2, b2 is above the preference
boundary of a1, a1 - p.
dis_ba[(c2, b2), a3] = 0.75 means that it is partly true that b2
is worse than a3 on criterion c2,
i.e., on c2, b2 is between the preference (a3 - p) and
veto (a3 - v) boundaries of a3.
"""
# Initialize the result DataFrame
index = pd.MultiIndex.from_product([A.columns, B.index],
names=['criteria', 'base'])
dis_ab = pd.DataFrame(index=index, columns=A.index)
dis_ba = pd.DataFrame(index=index, columns=A.index)
for criterion in A.columns:
for base in B.index:
for alternative in A.index:
a = A.loc[alternative, criterion]
b = B.loc[base, criterion]
p = T.loc['p', criterion]
v = T.loc['v', criterion]
# Calculate d_j(a_i, b_k)
# partial discordance (a_i, b_k) for c_j
if a >= b - p:
dis = 0
elif a < b - v:
dis = 1
else:
dis = (p - b + a) / (p - v)
dis_ab.loc[(criterion, base), alternative] = dis
# partial discordance (b_k, a_i) for c_j
if b >= a - p:
dis = 0
elif b <= a - v:
dis = 1
else:
dis = (p - a + b) / (p - v)
dis_ba.loc[(criterion, base), alternative] = dis
# Replace NaN with 0 and round to 3 decimal places
dis_ab = dis_ab.fillna(0).round(3)
dis_ba = dis_ba.fillna(0).round(3)
return dis_ab, dis_ba
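The discordance rule, written in the equivalent form (b - a - p) / (v - p), reproduces dis_ba[(c2, b2), a3] = 0.75 from the docstring example (b2 = 20, a3 = 27, p = 4, v = 8; a standalone sketch):

```python
def partial_dis(a, b, p, v):
    # Truth value of the opposition (veto strength) to "a outranks b".
    if a >= b - p:
        return 0.0
    if a < b - v:
        return 1.0
    return (b - a - p) / (v - p)

# dis_ba[(c2, b2), a3]: b2 = 20 falls between the preference and
# veto boundaries of a3 = 27 (p = 4, v = 8).
print(partial_dis(20, 27, 4, 8))  # 0.75
```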
def global_concordance(c, w):
"""Global concordance between profiles `a` and `b`.
Truth value (between 0 and 1) of the statement:
*a outranks b globally, i.e. for all criteria*
where "outranks" means "is at least as good as".
Args:
c (DataFrame): Partial (or local) concordance between
two profiles (a, b) for all criteria.
w (Series): Weights of criteria.
Returns:
C (DataFrame): Global concordance between two profiles.
The global concordance index is the sum of weighted partial concordances
divided by the sum of the weights.
Example
-------
Given partial concordance between alternatives and base profiles, `c_ab`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 0.5 | 1 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 1 | 0 |
+----------+------+-----+-----+-----+
| c2 | b1 | 1 | 1 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 1 | 0 | 1 |
+----------+------+-----+-----+-----+
and the weights w:
+---+----+-----+----+
| | | c1 | c2 |
+===+====+=====+====+
| w | | 0.7 | 0.3|
+---+----+-----+----+
the global concordance C_ab is:
+---------+------+------+------+
| | a1 | a2 | a3 |
+=========+======+======+======+
| base | | | |
+---------+------+------+------+
| b1 | 0.65 | 1.0 | 0.3 |
+---------+------+------+------+
| b2 | 0.3 | 0.7 | 0.3 |
+---------+------+------+------+
C[b1, a1] = (0.7 * 0.5 + 0.3 * 1) = 0.65
"""
# Normalize weights
w_normalized = w / w.sum()
# Get unique base values
base_values = c.index.get_level_values('base').unique()
# Initialize the result DataFrame
C = pd.DataFrame(index=base_values, columns=c.columns)
for base in base_values:
# Step 1: Calculate weighted concordance for each base
c_base = c.xs(base, level='base')
weighted_c = c_base.mul(w_normalized, axis=0)
# Step 2: Sum the columns to get global concordance
global_c = weighted_c.sum(axis=0)
# Assign the results to the corresponding row in C
C.loc[base] = global_c
return C
def credibility_index(C, d):
"""Credibility of the assertion "a outranks b".
Credibility index σ(a,b) corresponds to the concordance
index C(a,b) weakened by discordances d(a, b):
When no criterion shows strong opposition (discordance) to the
outranking relation, the credibility index is equal to the global
concordance.
When a discordant criterion opposes a veto to the assertion
”a outranks b" (i.e. discordance is 1), then credibility index σ(a,b)
becomes null (the assertion ”a outranks b" is not credible at all).
When one or more criteria strongly oppose the outranking relation
(i.e., their discordance exceeds the global concordance),
the credibility index is reduced by multiplying the global concordance
by a factor derived from the discordances that exceed the global
concordance.
The formula for this correction involves a product of terms,
each representing the effect of a discordant criterion.
This approach ensures that strong opposition on even a single criterion
can significantly reduce the credibility of the outranking relation,
reflecting the non-compensatory nature of ELECTRE methods.
The credibility index provides a nuanced measure of
the strength of the outranking relation by taking into account
performance on both supporting and opposing criteria.
Args:
C (DataFrame): Global concordance.
d (DataFrame): Discordance.
Returns:
sigma (DataFrame): Credibility index.
Example
-------
Global concordance `C_ba`:
+---------+------+------+------+
| | a1 | a2 | a3 |
+=========+======+======+======+
| base | | | |
+---------+------+------+------+
| b1 | 0.85 | 0.3 | 0.7 |
+---------+------+------+------+
| b2 | 1.0 | 1.0 | 0.7 |
+---------+------+------+------+
Discordance between base profiles and alternatives, `d_ba`:
+----------+------+-----+-----+-----+
| | | a1 | a2 | a3 |
+==========+======+=====+=====+=====+
| criteria | base | | | |
+----------+------+-----+-----+-----+
| c1 | b1 | 0 | 1 | 0 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 0 | 0 |
+----------+------+-----+-----+-----+
| c2 | b1 | 0 | 0 | 1 |
+----------+------+-----+-----+-----+
| | b2 | 0 | 0 | 0.75|
+----------+------+-----+-----+-----+
Obtained credibility index `sigma_ba`:
+---------+------+------+------+
| | a1 | a2 | a3 |
+=========+======+======+======+
| base | | | |
+---------+------+------+------+
| b1 | 0.85 | 0 | 0 |
+---------+------+------+------+
| b2 | 1 | 1 | 0.583|
+---------+------+------+------+
sigma_ba[b1, a1] = C_ba[b1, a1] because all d_ba[(c, b1), a1] = 0.
sigma_ba[b1, a2] = 0 because d_ba[(c1, b1), a2] = 1 (veto).
sigma_ba[b2, a3] =
C_ba[b2, a3] * (1 - d_ba[(c2, b2), a3]) / (1 - C_ba[b2, a3]) =
0.7 * (1 - 0.75) / (1 - 0.7) = 0.583
because d_ba[(c2, b2), a3] = 0.75 > C_ba[b2, a3] = 0.7
and the other discordances are 0 or smaller than the global concordances.
"""
# Initialize the result DataFrame with the same structure as C
sigma = pd.DataFrame(index=C.index, columns=C.columns)
for base in C.index:
for alternative in C.columns:
C_value = C.loc[base, alternative]
# Get discordance values for this base and alternative
d_values = d.loc[(slice(None), base), alternative]
# Identify criteria where discordance exceeds global concordance
F = d_values[d_values > C_value].index.get_level_values('criteria')
if len(F) == 0:
# If no discordance exceeds global concordance,
# credibility equals global concordance
sigma.loc[base, alternative] = C_value
else:
# Calculate the product term for discordances that exceed
# the global concordance
product_term = np.prod(
[(1 - d.loc[(criterion, base), alternative]
) / (1 - C_value)
for criterion in F])
# Calculate credibility index as corrected concordance
sigma.loc[base, alternative] = C_value * product_term
sigma = sigma.round(4) # Round to 4 decimal places for readability
return sigma
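The correction in the `else` branch can be checked against the docstring example, where a single discordance of 0.75 exceeds the global concordance C_ba[b2, a3] = 0.7 (a standalone sketch):

```python
# One discordant criterion with d > C weakens the global concordance.
C = 0.7    # global concordance C_ba[b2, a3]
d = 0.75   # discordance d_ba[(c2, b2), a3], which exceeds C

sigma = C * (1 - d) / (1 - C)
print(round(sigma, 3))  # 0.583
```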
def outrank(sigma_ab, sigma_ba, credibility_threshold):
"""Preference relation between alternatives and base profiles.
Four outranking (preference) relations are defined:
- σ(a,b) ≥ λ and σ(b,a) ≥ λ ⇒ a I b, "a is indifferent to b"
- σ(a,b) ≥ λ and σ(b,a) < λ ⇒ a ≻ b, "a is preferred to b"
- σ(a,b) < λ and σ(b,a) ≥ λ ⇒ a ≺ b, "a is not preferred to b"
- σ(a,b) < λ and σ(b,a) < λ ⇒ a R b, "a is incomparable with b"
where λ is a cutting level, i.e. the smallest value of the
credibility index compatible with the
assertion ”a outranks b", i.e., σ(a, b) ≥ λ ⇒ a > b.
a ≻ b, "a outranks b", i.e. "a is at least as good as b", means that a is
preferred to b.
a ≺ b, "a is not preferred to b", i.e. "a is not at least as good as b",
means that a is not preferred to b.
a I b, "a is indifferent to b" means that the performances of
the alternative a and of the base profile b are considered equivalent
or close enough that no clear preference can be established between them.
a R b, "a is incomparable with b" means that there is not enough evidence
to establish preference or indifference between profiles. This is typically
the case when an alternative a outranks the base profile b on some criteria
and the base profile b outranks the alternative a on other criteria.
Args:
sigma_ab (DataFrame): Credibility index that alternative a outranks
base profile b.
sigma_ba (DataFrame): Credibility index that base profile b outranks
alternative a.
credibility_threshold (float): Credibility threshold is a
minimum degree of credibility index that is considered necessary
to validate the statement "alternative a outranks base profile b".
It takes a value within the range [0.5, 1], typically 0.75.
Returns:
outranking (DataFrame): Preference relations: ≻, ≺, I, R between
base profiles (rows) and alternatives (columns).
Example
-------
credibility index `sigma_ab`: