IsaH57/PrincipleVote
PrincipleVote

This repository contains code for projects related to the analysis of voting patterns.

There are two main directories:

1. principle_vote

This directory contains code for analyzing voting patterns in different neural networks. More specifically, three different types of neural networks were built and trained to learn voting patterns from a dataset of voting profiles. Afterwards, the networks were evaluated on their ability to generalize to unseen voting profiles, correctly predict one or multiple winners, and adhere to voting axioms.

  • principle_vote/voting_cnn.py holds the implementation of a Convolutional Neural Network (CNN) to predict voting outcomes.

  • principle_vote/voting_mlp.py contains the implementation of a Multi-Layer Perceptron (MLP) for the same purpose.

  • principle_vote/voting_wec.py includes the implementation of a classifier that uses an embedding layer to learn a word-embedding-like representation of voters.
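
All three models need the voting profiles as numeric input. One common encoding (a hypothetical sketch, not necessarily the one used in this repository) flattens each ballot into a one-hot position-by-candidate matrix; the helper name `encode_profile` is made up for illustration:

```python
def encode_profile(profile, num_candidates):
    """Encode a voting profile as a flat 0/1 feature vector.

    `profile` is a list of ballots; each ballot lists candidate indices
    from most to least preferred. Each ballot becomes a
    num_candidates x num_candidates one-hot matrix (rows = rank positions,
    columns = candidates); all matrices are concatenated.
    """
    features = []
    for ballot in profile:
        matrix = [[0] * num_candidates for _ in range(num_candidates)]
        for position, candidate in enumerate(ballot):
            matrix[position][candidate] = 1
        features.extend(value for row in matrix for value in row)
    return features

# Three voters ranking three candidates (0, 1, 2):
profile = [[0, 1, 2], [1, 0, 2], [2, 1, 0]]
vector = encode_profile(profile, 3)
print(len(vector))  # 3 voters * 3 positions * 3 candidates = 27 features
```

A vector like this can be fed directly to the MLP, reshaped into a matrix for the CNN, or replaced by learned voter embeddings in the embedding-based classifier.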

The data used for training and evaluating these models consists of synthetic voting profiles produced by principle_vote/synth_data.py. The profiles are created with the pref_voting library and follow its format.
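
The generation step can be pictured with a stand-alone sketch. The actual script uses pref_voting's probability models, but an impartial-culture sampler (every strict ranking equally likely) written in plain Python looks roughly like this; `sample_profile` is a hypothetical name:

```python
import random

def sample_profile(num_voters, num_candidates, seed=None):
    """Sample a profile under the impartial-culture assumption:
    each voter draws a uniformly random strict ranking of the candidates."""
    rng = random.Random(seed)
    candidates = list(range(num_candidates))
    profile = []
    for _ in range(num_voters):
        ballot = candidates[:]
        rng.shuffle(ballot)  # independent uniform ranking per voter
        profile.append(ballot)
    return profile

profile = sample_profile(num_voters=5, num_candidates=4, seed=0)
print(profile)
```

pref_voting additionally supports other probability models (e.g. Mallows-style noise around a reference ranking), which yield more realistic correlated ballots.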

Additionally, principle_vote/axioms.py contains implementations of various voting axioms the profiles should fulfill. The script provides functions to evaluate single profiles against these axioms, as well as implementations that can be plugged into the models' loss functions to encourage adherence to these axioms during training.
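
As an illustration of what such an axiom check might look like (a sketch under stated assumptions, not the repository's implementation), here is a Condorcet-consistency test for a single profile; both function names are invented for this example:

```python
def condorcet_winner(profile, num_candidates):
    """Return the Condorcet winner of `profile`, or None if there is none.

    A Condorcet winner beats every other candidate in pairwise majority
    comparisons. `profile` is a list of ballots (rankings, best first).
    """
    def beats(a, b):
        # a beats b if a strict majority of voters ranks a above b
        wins = sum(1 for ballot in profile if ballot.index(a) < ballot.index(b))
        return wins > len(profile) / 2

    for candidate in range(num_candidates):
        if all(beats(candidate, other)
               for other in range(num_candidates) if other != candidate):
            return candidate
    return None

def satisfies_condorcet(profile, num_candidates, predicted_winners):
    """Condorcet axiom: if a Condorcet winner exists, it must be elected."""
    winner = condorcet_winner(profile, num_candidates)
    return winner is None or predicted_winners == {winner}

profile = [[0, 1, 2], [0, 2, 1], [1, 2, 0]]
print(condorcet_winner(profile, 3))  # candidate 0 beats 1 and 2 head-to-head
```

A boolean check like this works for evaluation; for a loss term the hard comparison would have to be relaxed into a differentiable penalty, which is what the loss-function variants in axioms.py are for.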

The code is based on the paper "Learning How to Vote with Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks" by Levin Hornischer and Zoi Terzopoulou (2025).

2. ranking_models

This directory contains code for analyzing the ability of voting models to find the image that best matches a prompt, given the prompt and a list of images. It uses models and data from the HPSv2 package.

To run experiments, first install the HPSv2 package. The HPSv2.0 model used here can be downloaded from here, and the images belonging to the test.json in this directory from here. Due to differences in folder structure, one might need to adjust the image paths in test.json after downloading.

Once you have downloaded both, you can run experiments using the scripts in this directory. The main scripts are:

  • ranking_models/run_model_scoring.py uses the model's built-in score function to score each image separately given a prompt. Outputs can be found in ranking_models/ranking_results/model_scoring_results_raw_output.json.
  • ranking_models/run_model_ranking.py uses the model's built-in evaluate_rank function, which automatically ranks the images based on their scores. Outputs can be found in ranking_models/ranking_results/model_ranking_results_raw_output.json.
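
The relationship between the two scripts can be pictured as follows: per-image scores induce a ranking once sorted. This sketch is only an assumption about how scores turn into ranks, not HPSv2's actual internals, and `ranking_from_scores` is a made-up helper:

```python
def ranking_from_scores(scores):
    """Turn a list of per-image scores into a ranking of image indices,
    best (highest score) first. Ties are broken by original list order,
    since Python's sort is stable."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

scores = [0.12, 0.87, 0.45, 0.87]
ranking = ranking_from_scores(scores)
print(ranking)  # [1, 3, 2, 0]
```

Comparing such score-derived rankings against the directly produced ones is a quick sanity check that the two raw-output files are consistent.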

Using scripts from the utils directory, the raw outputs are transformed into formats suitable for further analysis.

The "ground-truth" rankings by human annotators can be found in the ranking_models/test.json file. The script ranking_models/convert_hpdv2_to_prefvoting.py was used to convert the original test.json into voting profiles in the pref_voting format. Winners according to three different voting methods (Borda count, Copeland, and Plurality) can be found in ranking_models/ranking_results/human_voting_analysis.json.
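
The three voting methods can be sketched without pref_voting. A minimal stdlib version (assuming complete strict ballots over the same candidates; winners returned as sets, with ties kept):

```python
from collections import Counter
from itertools import combinations

def plurality_winners(profile):
    """Winners = candidates with the most first-place votes."""
    firsts = Counter(ballot[0] for ballot in profile)
    best = max(firsts.values())
    return {c for c, n in firsts.items() if n == best}

def borda_winners(profile):
    """Borda: a candidate at rank r on a ballot of m candidates scores m-1-r."""
    m = len(profile[0])
    scores = Counter()
    for ballot in profile:
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank
    best = max(scores.values())
    return {c for c, s in scores.items() if s == best}

def copeland_winners(profile):
    """Copeland: score = pairwise majority wins minus pairwise losses."""
    candidates = sorted(profile[0])
    scores = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_wins = sum(1 for ballot in profile if ballot.index(a) < ballot.index(b))
        b_wins = len(profile) - a_wins
        if a_wins > b_wins:
            scores[a] += 1
            scores[b] -= 1
        elif b_wins > a_wins:
            scores[b] += 1
            scores[a] -= 1
    best = max(scores.values())
    return {c for c, s in scores.items() if s == best}

profile = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
print(plurality_winners(profile), borda_winners(profile), copeland_winners(profile))
```

The pref_voting library provides battle-tested versions of all three (and many more), which is why the repository relies on it rather than hand-rolled implementations.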

The ranking_models/results_comparison directory contains the results of comparing the model rankings to the human rankings, obtained by executing scripts from ranking_models/utils.
