About Me

My research interests center on AI for Complexity and AI by Complexity. In AI for Complexity, I apply machine learning methods to classic problems in complex systems, such as network reconstruction, network generation, and renormalization. In AI by Complexity, I draw on theoretical advances in complexity science to guide the design of the next generation of artificial intelligence frameworks.

 

2013-2017
Hangzhou Dianzi University - Undergraduate

2018-2019
Beijing Normal University - Research Assistant

2019-present
Beijing Normal University - Ph.D. Student

Selected Papers

Network Information Dynamics Renormalization Group
Zhang Zhang, Arsham Ghavasieh, Jiang Zhang, Manlio De Domenico*

Information dynamics is vital for many complex systems with networked backbones, from cells to societies. Recent advances in statistical physics have made it possible to capture macroscopic network properties, such as how diverse the flow pathways are and how fast signals can travel, using the network counterparts of entropy and free energy. However, given the computational challenge posed by the large number of components in real-world systems, there is a need for advanced network renormalization (i.e., compression) methods that provide simpler-to-read representations while preserving the flow of information between functional units across scales. We use graph neural networks to identify suitable groups of components for coarse-graining a network and achieve a low computational complexity suitable for practical applications. Even for large compressions, our approach is highly effective in preserving the flow in synthetic and empirical networks, as demonstrated by theoretical analysis and numerical experiments. Remarkably, we find that the model works by merging nodes with similar ecological niches (i.e., structural properties), suggesting that they play redundant roles as senders or receivers of information. Our work offers a low-complexity renormalization method that breaks the size barrier for meaningful compressions of extremely large networks, acting as a multiscale topological lens that preserves the flow of information in biological, social, and technological systems better than existing alternatives, which mostly focus on the structural properties of a network.

Download this paper here
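As a rough illustration of the quantities this line of work aims to preserve, the sketch below computes the density matrix, von Neumann entropy, and partition function of diffusive information dynamics on a network, following the standard statistical-physics formulation. It is not the GNN coarse-graining model from the paper; the diffusion time tau, the example graph, and the free-energy convention are assumptions made only for the example.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def network_density_matrix(G, tau=1.0):
    """Density matrix of diffusive information dynamics on graph G.

    rho = exp(-tau * L) / Tr(exp(-tau * L)), with L the graph Laplacian.
    """
    L = nx.laplacian_matrix(G).toarray().astype(float)
    K = expm(-tau * L)          # propagator of the diffusion dynamics
    Z = np.trace(K)             # partition function
    return K / Z, Z

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho), computed from the eigenvalues of rho."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]   # discard numerical zeros
    return -np.sum(eigvals * np.log(eigvals))

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(200, 3, seed=0)   # toy example graph
    rho, Z = network_density_matrix(G, tau=1.0)
    print("entropy:", von_neumann_entropy(rho))
    print("free energy:", -np.log(Z))   # up to normalization conventions in the literature
```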






Neural Network Pruning by Gradient Descent
Zhang Zhang, Ruyi Tao, Jiang Zhang*

The rapid increase in the parameters of deep learning models has led to significant costs, challenging computational efficiency and model interpretability. In this paper, we introduce a novel and straightforward neural network pruning framework that incorporates the Gumbel-Softmax technique. This framework enables the simultaneous optimization of a network's weights and topology in an end-to-end process using stochastic gradient descent. Empirical results demonstrate its exceptional compression capability, maintaining high accuracy on the MNIST dataset with only 0.15% of the original network parameters. Moreover, our framework enhances neural network interpretability, not only by allowing easy extraction of feature importance directly from the pruned network but also by enabling visualization of feature symmetry and the pathways of information propagation from features to outcomes. Although the pruning strategy is learned through deep learning, it is surprisingly intuitive and understandable, focusing on selecting key representative features and exploiting data patterns to achieve extreme sparse pruning. We believe our method opens a promising new avenue for deep learning pruning and the creation of interpretable machine learning systems.

Download this paper here
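To make the core idea concrete, here is a minimal sketch of a linear layer whose weights are gated by a Gumbel-Softmax-relaxed binary mask, so that the mask and the weights can be trained together by stochastic gradient descent. The class name, initialization, and temperature are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelMaskedLinear(nn.Module):
    """Linear layer whose weights are gated by a learnable binary mask.

    Each weight has two logits (keep, drop); a Gumbel-Softmax sample over
    them yields a (nearly) binary gate that stays differentiable, so the
    mask and the weights are optimized jointly by gradient descent.
    """

    def __init__(self, in_features, out_features, tau=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # logits[..., 0] -> keep, logits[..., 1] -> drop
        self.mask_logits = nn.Parameter(torch.zeros(out_features, in_features, 2))
        self.tau = tau

    def forward(self, x):
        gate = F.gumbel_softmax(self.mask_logits, tau=self.tau, hard=True)[..., 0]
        return F.linear(x, self.weight * gate, self.bias)

    def sparsity(self):
        """Fraction of weights whose 'keep' logit wins (kept after pruning)."""
        with torch.no_grad():
            keep = (self.mask_logits[..., 0] > self.mask_logits[..., 1]).float()
            return keep.mean().item()
```

In practice one would also add a sparsity-inducing penalty on the keep logits (omitted here) so that training is pushed toward extreme pruning rather than keeping all weights.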


Graph Completion Through Local Pattern Generalization
Zhang Zhang, Ruyi Tao, Yongzai Tao, Mingze Qi and Jiang Zhang*

Network completion is more challenging than link prediction, as it aims to infer both missing links and nodes. Although various methods exist for this problem, few utilize structural information, specifically the similarity of local connection patterns. In this study, we introduce a model called C-GIN, which captures local structural patterns in the observed portions of a network using a Graph Auto-Encoder equipped with a Graph Isomorphism Network, and generalizes these patterns to complete the entire graph. Experimental results on both synthetic and real-world networks across diverse domains indicate that C-GIN not only requires less information but also outperforms baseline prediction models in most cases. Additionally, we propose a structure-based metric called the Reachable Clustering Coefficient (RCC). Experiments reveal that C-GIN performs better on networks with higher RCC values.

This paper was published at the 12th International Conference on Complex Networks and their Applications.
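For readers unfamiliar with the building blocks, the sketch below shows a dense GIN update combined with an inner-product decoder, the generic graph-auto-encoder pattern the abstract refers to. The module names, layer sizes, and two-layer depth are assumptions for illustration, not C-GIN's actual implementation.

```python
import torch
import torch.nn as nn

class DenseGINLayer(nn.Module):
    """GIN update on a dense adjacency: h' = MLP((1 + eps) * h + A @ h)."""

    def __init__(self, in_dim, out_dim, eps=0.0):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps))
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, h, adj):
        return self.mlp((1.0 + self.eps) * h + adj @ h)

class GINAutoEncoder(nn.Module):
    """Two GIN layers encode node embeddings; an inner product decodes links."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gin1 = DenseGINLayer(in_dim, hidden_dim)
        self.gin2 = DenseGINLayer(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        z = self.gin2(torch.relu(self.gin1(x, adj)), adj)
        return torch.sigmoid(z @ z.t())   # predicted edge probabilities
```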






A General Deep Learning Framework for Network Reconstruction and Dynamics Learning (2019)
Zhang Zhang, Yi Zhao, Jing Liu, Shuo Wang, Ruyue Xin and Jiang Zhang*

In this work, we introduce the Gumbel Graph Network (GGN), a model-free, data-driven deep learning framework for network reconstruction and dynamics simulation. Our method can reconstruct many kinds of dynamics, including continuous, discrete, and even binary dynamics.

This paper was published in Applied Network Science.
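A minimal sketch of the underlying idea, jointly sampling an adjacency matrix with the Gumbel-Softmax trick and learning a dynamics model from observed state trajectories, is given below. The module name, message-passing form, and hyperparameters are placeholder assumptions rather than the published GGN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelGraphLearner(nn.Module):
    """Jointly learns a candidate adjacency matrix and node dynamics.

    Each potential edge has two logits (present, absent); Gumbel-Softmax
    gives differentiable edge samples, so the graph and the dynamics
    model are trained together from state trajectories by SGD.
    """

    def __init__(self, n_nodes, state_dim, hidden_dim=64, tau=1.0):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes, 2))
        self.tau = tau
        # simple message-passing dynamics: aggregate neighbors, predict next state
        self.dynamics = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def sample_adjacency(self):
        return F.gumbel_softmax(self.edge_logits, tau=self.tau, hard=True)[..., 0]

    def forward(self, x_t):
        """x_t: (n_nodes, state_dim) current states -> predicted next states."""
        adj = self.sample_adjacency()
        msg = adj @ x_t                   # sum of neighbor states
        return self.dynamics(torch.cat([x_t, msg], dim=-1))

# training loop sketch: minimize prediction error over observed trajectories
# model = GumbelGraphLearner(n_nodes=10, state_dim=4)
# loss = F.mse_loss(model(x_t), x_t_plus_1)
```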


Blogs

Introduction to several typical papers on Graph Pooling Methods