This project focuses on improving the performance of Graph Convolutional Networks (GCNs). These networks have become a common method for graph representation learning and are widely used in a variety of real-world applications. However, GCNs are typically shallow, with no more than two layers. Greater depth usually leads to overfitting, oversmoothing, or even an inability to learn, and standard remedies such as dropout and weight penalization do not help. In this work we explore techniques that allow the construction of deeper networks while preserving their ability to train effectively.
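One such technique, DropEdge (referenced in the scripts below), randomly removes a fraction of the graph's edges at each training epoch to counter oversmoothing. A minimal sketch of the sampling step; the function name and edge-list representation are illustrative assumptions, not this project's actual code:

```python
import random

def drop_edge(edges, drop_rate=0.2, rng=random):
    """DropEdge-style sampling: randomly discard a fraction of edges.

    edges: list of (u, v) pairs; drop_rate: fraction of edges to drop.
    Returns a new, sparser edge list; resample at every training epoch
    so the network sees a different perturbed graph each time.
    """
    return [e for e in edges if rng.random() >= drop_rate]

# Example: a small 4-node graph, dropping ~40% of edges per epoch.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
sparser_edges = drop_edge(edges, drop_rate=0.4)
```

In practice the convolution at each epoch is then computed on the adjacency matrix built from the sampled edge list rather than the full graph.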
All experiments are reproducible by running the notebooks notebooks_feathernox and DropEdge/script.