    Muhan Zhang (张牧涵)

    Email: muhan "at" pku "dot" edu "dot" cn
    Google Scholar, GitHub, Lab GitHub


    Biography

    Muhan is a tenure-track assistant professor and PhD advisor at the Institute for Artificial Intelligence, Peking University. Before returning to China, he was a research scientist at Facebook AI (now Meta AI), working on large-scale graph learning systems and problems (2019-2021). He received his PhD in computer science from Washington University in St. Louis (2015-2019), advised by Prof. Yixin Chen. Before WashU, he obtained a bachelor's degree from Shanghai Jiao Tong University as a member of the IEEE pilot class, where he worked with Prof. Ya Zhang and Prof. Wenjun Zhang.

    Dr. Muhan Zhang is an assistant professor, researcher, PhD advisor, and assistant dean at the Institute for Artificial Intelligence, Peking University. He is a recipient of the first cohort of the national-level young talent program (overseas), and a Peking University Boya Young Scholar and Weiming Young Scholar. He leads multiple projects and sub-projects funded by the NSFC, the Ministry of Science and Technology, and local governments, including a sub-project of a Science and Technology Innovation 2030 Major Project and an NSFC General Program project, covering graph neural networks, circuit design, drug molecule design, general vision, knowledge graphs, and intelligent judicial systems. His research is also sponsored by companies including Huawei, Alibaba, Ant Group, Baidu, Li Auto, and Lingjun, covering efficient fine-tuning of large language models, reasoning, acceleration, and graph foundation models. He regularly serves as an area chair for top conferences such as NeurIPS, ICML, ICLR, and CVPR. His main research directions include graph machine learning, LLM reasoning and efficient fine-tuning, and intelligent judicial systems, where his work is internationally leading. His Google Scholar citations exceed 7,800, with two first-author papers cited 2,200+ and 1,800+ times respectively, and he is listed among Elsevier's top 2% most-cited scientists worldwide. He is the class advisor of the 2023 cohort of PKU's general AI honors class (Tong Class), and teaches the undergraduate course Introduction to Artificial Intelligence and the undergraduate/graduate course Machine Learning at Peking University. Dr. Zhang graduated from the IEEE pilot class of Shanghai Jiao Tong University in 2015, received his PhD in computer science from Washington University in St. Louis in 2019, and worked as a research scientist at Facebook AI (now Meta AI) from 2019 to 2021, where he was responsible for developing and researching large-scale graph machine learning systems at the billion-user scale.

    Research Interests

    Graph machine learning. Many real-world problems are inherently graph-structured, e.g., social networks, biological networks, the World Wide Web, molecules, circuits, brains, road networks, and knowledge graphs. Many machine learning models are also defined on graphs, such as neural networks and graphical models. In this field, I develop algorithms and theories for learning over graphs and apply them to problems like link prediction, graph classification, graph structure optimization, and knowledge graph reasoning. I am also interested in practical applications of graph neural networks, including brain modeling, drug discovery, circuit design, and healthcare.
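
    As a concrete flavor of this line of work, here is a minimal, self-contained Python sketch of the double-radius node labeling idea behind our SEAL link predictor and the more general labeling trick: nodes around a candidate link (u, v) are tagged with their distances to u and v, so a GNN can distinguish the target pair from the rest of the enclosing subgraph. This is an illustration only, not our released code; the full method also masks the target link when computing distances, which this toy omits.

        from collections import deque

        def bfs_distances(adj, source):
            """Breadth-first-search distances from `source` over adjacency dict `adj`."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                for nbr in adj[node]:
                    if nbr not in dist:
                        dist[nbr] = dist[node] + 1
                        queue.append(nbr)
            return dist

        def double_radius_labels(adj, u, v):
            """Tag every node with its (distance to u, distance to v) pair."""
            du, dv = bfs_distances(adj, u), bfs_distances(adj, v)
            inf = float("inf")
            return {n: (du.get(n, inf), dv.get(n, inf)) for n in adj}

        # Toy graph: path 0-1-2-3-4 plus an extra edge 1-3.
        adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
        print(double_radius_labels(adj, 1, 3))  # node 2 gets (1, 1), node 0 gets (1, 2)

    These labels are then fed to the GNN as extra node features alongside the original attributes.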

    Large language model reasoning. Compared to machines, humans possess extreme flexibility in handling unseen tasks in a few-shot or zero-shot way, much of which is attributed to humans' System-2 intelligence for complex logical reasoning, task planning, causal reasoning, and inductive generalization. Recently, large language models (LLMs) have shown promising improvements in such abilities, yet they still fail in some of the simplest cases and remain ultimately unreliable. For example, state-of-the-art LLMs cannot accurately add two large integers without resorting to external tools. In this field, we aim to understand and improve LLMs' reasoning ability using means such as graphs, code, rules, and fine-tuning. Ultimately, we want to build machines with human-like reasoning ability that are flexible, interpretable, and reliable, and enable machines to master hallmark human general-intelligence abilities such as induction, math, coding, and scientific discovery.
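
    The integer-addition failure mentioned above is easy to check yourself. Below is a minimal, hypothetical test harness; query_llm is a placeholder for whatever LLM client you use, and the digit and trial counts are arbitrary choices.

        import random

        def query_llm(prompt: str) -> str:
            """Placeholder: wire this up to your own LLM client."""
            raise NotImplementedError

        def addition_accuracy(n_digits: int = 30, n_trials: int = 20) -> float:
            """Exact-match accuracy of the model on random n-digit additions."""
            correct = 0
            for _ in range(n_trials):
                a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
                b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
                reply = query_llm(f"Compute {a} + {b}. Answer with the number only.")
                correct += reply.strip() == str(a + b)  # compare against the true sum
            return correct / n_trials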


    Prospective Students

    I am looking for highly motivated PhD/undergraduate students who are interested in doing graph ML research with me. Please shoot me an email if you meet at least three of the following criteria: 1) creativity and passion for research, 2) solid math skills, 3) solid coding skills, 4) good English (writing and speaking). I will do my best to support your success, including detailed guidance, plenty of computation resources, and research freedom for senior PhDs. You are especially welcome if you have an interdisciplinary background (such as math/physics/chemistry/biology) and are also proficient in coding. For students at Peking University, you can schedule one-on-one chats with me at my office.

    Due to the large number of applicants, the competition is intense every year and I may not be able to respond to every email. Hope you understand!

    For potential PhD students: I am mainly affiliated with the Institute for AI (人工智能研究院), which is based in the main campus (燕园) of PKU. Your office will also be there, and you won't need to go to the Changping (昌平) campus.


    News

    11/7/2024: Why do SOTA LLMs tend to think 9.11 > 9.9? Do they really understand numbers? We open-sourced NUPA, a study of the Numerical Understanding and Processing Abilities of LLMs, containing a benchmark with 4 numerical representations and 17 distinct tasks.

    9/27/2024: I wrote a draft on the lexical invariance problem on multisets and graphs, proving the necessary and sufficient conditions for a lexically invariant function on multisets and graphs, respectively.

    9/26/2024: Four papers accepted at NeurIPS-24! Congrats to Fanxu, Cai, Xiaojuan and Yanbo!

    7/12/2024: We released GOFA, the Generative One-For-All model aiming to tackle all tasks on all kinds of graphs.

    6/24/2024: We proposed an efficient neural common neighbor method for temporal graph link prediction, which achieves three new SoTA results on TGB.

    5/17/2024: 1 paper accepted at KDD-24! Congrats to Zuoyu!

    5/16/2024: 2 papers accepted at ACL-24! Congrats to Jiaqi and Xiaojuan!

    5/2/2024: 3 papers accepted at ICML-24! Congrats to Yi, Xiyuan and Yanbo!

    4/13/2024: Invited by Huawei to give a talk on PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models and Case-Based or Rule-Based: How Do Transformers Do the Math?

    4/3/2024: We proposed a parameter-efficient fine-tuning method called PiSSA, which surpasses the fine-tuning effectiveness of the widely used LoRA on mainstream datasets and models without additional cost. A free lunch for PEFT! See preprint at PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models.
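
    For intuition, here is a minimal sketch of the core idea (an illustrative sketch, not the released PiSSA code): factor a pretrained weight matrix by SVD, make its top-r principal directions the trainable low-rank adapter, and freeze the residual.

        import torch

        def pissa_init(W: torch.Tensor, r: int):
            """Split W into a trainable principal part A @ B plus a frozen residual."""
            U, S, Vh = torch.linalg.svd(W, full_matrices=False)
            sqrt_s = torch.sqrt(S[:r])
            A = U[:, :r] * sqrt_s             # (out, r): trainable
            B = sqrt_s.unsqueeze(1) * Vh[:r]  # (r, in): trainable
            return A, B, W - A @ B            # residual: frozen during fine-tuning

        W = torch.randn(64, 32)
        A, B, W_res = pissa_init(W, r=4)
        print(torch.allclose(A @ B + W_res, W, atol=1e-5))  # True: exact decomposition

    Unlike LoRA, whose adapter starts at zero, training here starts from the most informative directions of W, which is where the reported gains come from.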

    2/1/2024: Invited by Alibaba Cloud and Tongyi Lab to give a talk on "One for All: Towards Training One Graph Model for All Classification Tasks".

    1/17/2024: 5 papers accepted at ICLR-24! Congrats to Hao, Xiyuan, Yinan, Zehao, and Ling!

    10/17/2023: We introduced a theoretical framework that views complex answer generation in large language models as a hierarchical "template-content" structure, which can explain how LLMs decompose tasks and perform complex reasoning. See preprint at Explaining the Complex Task Reasoning of Large Language Models with Template-Content Structure.

    9/22/2023: 5 papers accepted at NeurIPS-23! Congrats to Lecheng, Jiarui, Zian, Junru and Cai!

    6/5/2023: We proposed (k,t)-FWL+, a subgraph GNN achieving new state-of-the-art results on ZINC. See preprint at Towards Arbitrarily Expressive GNNs in O(n^2) Space by Rethinking Folklore Weisfeiler-Lehman.

    5/29/2023: We introduced code prompting, a neural-symbolic prompting method that uses code as intermediate steps to improve LLMs' reasoning ability. See preprint at Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models.

    5/31/2023: Gave a talk on "Is Distance Matrix Enough for Geometric Deep Learning?" at Amazon Web Services AI Shanghai Lablet.

    5/24/2023: We systematically tested LLMs' inductive, deductive, and abductive reasoning abilities and found that their performance dropped substantially when natural language was replaced with symbols. See preprint at Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners.

    5/8/2023: We constructed BREC: a new comprehensive dataset for evaluating GNN expressiveness. The arXiv link of our manuscript is https://arxiv.org/pdf/2304.07702.pdf. The dataset and evaluation code are at https://github.com/GraphPKU/BREC. We welcome suggestions, contributions, or collaborations to improve BREC.

    4/25/2023: One paper accepted at ICML-23! From Relational Pooling to Subgraph GNNs. Congrats to Cai and Xiyuan!

    4/20/2023: We provided a complete theory of using graph neural networks (GNNs) for multi-node representation learning with labeling tricks, extending our original labeling trick paper. The arXiv link of our manuscript is https://arxiv.org/pdf/2304.10074.pdf.

    4/18/2023: We proposed Neural Common Neighbor (NCN) for link prediction. The preprint can be found at Neural Common Neighbor with Completion for Link Prediction.

    1/21/2023: Two papers accepted at ICLR-23! I2-GNNs and Circuit Graph Neural Network. I2-GNNs is the first linear-time GNN model to count 6-cycles. Congrats to Yinan and Zehao!

    11/24/2022: Two papers accepted at LoG-22! Graph Neural Network with Local Frame and Subgraph-aware Weisfeiler-Lehman. Congrats to Xiyuan and Zhaohui!

    11/18/2022: Invited by BioMap to give a talk on 3DLinker!

    9/15/2022: Three papers accepted at NeurIPS-22! Rethinking KG evaluation, K-hop GNN, and Geodesic GNN. Rethinking KG evaluation is accepted as an oral presentation! Congrats to Haotong, Jiarui and Lecheng!

    7/26/2022: Gave a talk on "How Powerful are Spectral Graph Neural Networks" at "AI + Math" Colloquia of Shanghai Jiao Tong University! Video link (in English).

    7/9/2022: Gave a talk on "How Powerful are Spectral Graph Neural Networks" at LOGS China! Video link at Bilibili (in Chinese).

    5/16/2022: Three papers accepted at ICML-22! 3DLinker, JacobiConv, and PACE. 3DLinker is accepted as a long presentation (118/5630)! Congrats to Yinan, Xiyuan and Zehao!

    4/27/2022: Invited by Twitter (London and New York teams) to give a talk on Labeling Trick!

    4/15/2022: I will serve as an area chair for NeurIPS-22.

    4/9/2022: Invited by AI Time to give a talk on Nested GNNs and ShaDow in the AI2000 Young Scientist Track!

    3/6/2022: Elected to the 2022 AI 2000 Most Influential Scholars by AMiner. Ranked 66th globally in the AAAI/IJCAI category.

    2/17/2022: Invited by AI Time to give a talk on Labeling Trick!

    1/24/2022: Two papers accepted at ICLR-22! Subgraph Representation Learning GNN, Positional Encoding for more powerful GNN!

    1/10/2022: Our book Graph Neural Networks: Foundations, Frontiers, and Applications is published! A free version is available on the GNN Book Website. Happy to contribute one chapter on GNN for link prediction!

    12/22/2021: I will serve as a meta-reviewer (area chair) for ICML-22.

    12/19/2021: Glad to organize a graph workshop at ICLR-22: PAIR2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data. Welcome to contribute!

    9/29/2021: Three papers accepted at NeurIPS! Labeling Trick, Nested GNN and ShaDow GNN!

    9/16/2021: Gave a presentation on Labeling Trick at 4Paradigm!

    8/19/2021: Gave a presentation on Labeling Trick at Renmin University of China!

    3/13/2021: I am going to join Peking University as an assistant professor starting from May 2021. Welcome to apply for PKU!


    Publications

      2024

    • [NeurIPS-24] F. Meng, Z. Wang and M. Zhang, PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models, Advances in Neural Information Processing Systems (NeurIPS-24), spotlight presentation, 2024. (PDF)
    • [NeurIPS-24] C. Zhou, X. Wang and M. Zhang, Latent Graph Diffusion: A Unified Framework for Generation and Prediction on Graphs, Advances in Neural Information Processing Systems (NeurIPS-24), 2024. (PDF)
    • [NeurIPS-24-Benchmark] X. Tang, J. Li, Y. Liang, S. C. Zhu, M. Zhang# and Z. Zheng#, Mars: Situated Inductive Reasoning in an Open-World Environment, Advances in Neural Information Processing Systems, D&B Track (NeurIPS-24-Benchmark), 2024. (PDF)
    • [NeurIPS-24-Benchmark] 4DBInfer: A 4D Benchmarking Toolbox for Graph-Centric Predictive Modeling on Relational DBs, Advances in Neural Information Processing Systems, D&B Track (NeurIPS-24-Benchmark), 2024. (PDF)
    • [KDD-24] Z. Yan, J. Zhou, L. Gao#, Z. Tang and M. Zhang#, An Efficient Subgraph GNN with Provable Substructure Counting Power, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-24), 2024. (PDF)
    • [ACL-24] J. Li, M. Wang, Z. Zheng and M. Zhang, LooGLE: Can Long-Context Language Models Understand Long Contexts?, The 62nd Annual Meeting of the Association for Computational Linguistics (ACL-24), 2024. (PDF) (Source code)
    • [ACL-24] X. Tang, S. C. Zhu, Y. Liang and M. Zhang, RulE: Knowledge Graph Reasoning with Rule Embedding, Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL-24), 2024. (PDF) (Source code)
    • [TNNLS-24] Z. Liu, F. Ji, J. Yang, X. Cao, M. Zhang, H. Chen and Y. Chang, Refining Euclidean Obfuscatory Nodes Helps: A Joint-Space Graph Learning Method for Graph Neural Networks, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024.
    • [iScience-24] X. Meng, X. Yan, K. Zhang, D. Liu, X. Cui, Y. Yang, M. Zhang, ..., and Y. Tang, The application of large language models in medicine: A scoping review, iScience, Volume 27, Issue 5, May 2024. (PDF)
    • [JACT-24] C. Wan, M. Zhang, P. Dang, W. Hao, S. Cao, P. Li and C. Zhang, Ambiguities in neural-network-based hyperedge prediction, Journal of Applied and Computational Topology, May 2024. (PDF) (Source code)
    • [ICML-24] Y. Hu, X. Tang, H. Yang and M. Zhang, Case-Based or Rule-Based: How Do Transformers Do the Math?, Proc. International Conference on Machine Learning (ICML-24), 2024. (PDF) (Source code)
    • [ICML-24] X. Wang, P. Li and M. Zhang, Graph As Point Set, Proc. International Conference on Machine Learning (ICML-24), 2024. (PDF)
    • [ICML-24] Y. Wang and M. Zhang, An Empirical Study of Realized GNN Expressiveness, Proc. International Conference on Machine Learning (ICML-24), 2024. (PDF) (Source code)
    • [WWW-24] H. Liu, J. Feng, L. Kong, D. Tao, Y. Chen and M. Zhang, Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks, The Web Conference (WWW-24), 2024. (PDF) (Source code)
    • [ICLR-24] H. Liu, J. Feng, L. Kong, N. Liang, D. Tao, Y. Chen and M. Zhang, One For All: Towards Training One Graph Model For All Classification Tasks, International Conference on Learning Representations (ICLR-24), spotlight presentation (4.96% acceptance rate), 2024. (PDF) (Source code)
    • [ICLR-24] X. Wang, H. Yang and M. Zhang, Neural Common Neighbor with Completion for Link Prediction, International Conference on Learning Representations (ICLR-24), 2024. (PDF) (Source code)
    • [ICLR-24] Y. Huang, W. Lu, J. Robinson, Y. Yang, M. Zhang, S. Jegelka and P. Li, On the Stability of Expressive Positional Encodings for Graph Neural Networks, International Conference on Learning Representations (ICLR-24), 2024. (PDF) (Source code)
    • [ICLR-24] Z. Dong, M. Zhang, P. Payne, M. Province, C. Cruchaga, T. Zhao, F. Li, Y. Chen, Rethinking the Power of Graph Canonization in Graph Representation Learning with Stability, International Conference on Learning Representations (ICLR-24), 2024. (PDF)
    • [ICLR-24] L. Yang, Y. Tian, M. Xu, Z. Liu, S. Hong, W. Qu, W. Zhang, B. Cui, M. Zhang# and J. Leskovec, VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs, International Conference on Learning Representations (ICLR-24), 2024. (PDF) (Source code)
      2023

    • [NeurIPS-23] J. Zhou, J. Feng, X. Wang and M. Zhang, Distance-Restricted Folklore Weisfeiler-Lehman GNNs with Provable Cycle Counting Power, Advances in Neural Information Processing Systems (NeurIPS-23), spotlight presentation (3.06% acceptance rate), 2023. (PDF)
    • [NeurIPS-23] Z. Li, X. Wang, Y. Huang and M. Zhang, Is Distance Matrix Enough for Geometric Deep Learning?, Advances in Neural Information Processing Systems (NeurIPS-23), 2023. (PDF)
    • [NeurIPS-23] L. Kong, J. Feng, H. Liu, D. Tao, Y. Chen and M. Zhang, MAG-GNN: Reinforcement Learning Boosted Graph Neural Network, Advances in Neural Information Processing Systems (NeurIPS-23), 2023. (PDF)
    • [NeurIPS-23] J. Feng, L. Kong, H. Liu, D. Tao, F. Li, M. Zhang and Y. Chen, Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman, Advances in Neural Information Processing Systems (NeurIPS-23), 2023. (PDF)
    • [NeurIPS-23] C. Zhou, X. Wang and M. Zhang, Facilitating Graph Neural Networks with Random Walk on Simplicial Complexes, Advances in Neural Information Processing Systems (NeurIPS-23), 2023. (PDF)
    • [VLDB-23] H. Yin, M. Zhang, J. Wang and P. Li, SUREL+: Moving from Walks to Sets for Scalable Subgraph-based Graph Representation Learning, Proc. of the VLDB Endowment (VLDB-23), volume 16, 2023. (PDF) (Source code)
    • [ICML-23] C. Zhou, X. Wang and M. Zhang, From Relational Pooling to Subgraph GNNs: A Universal Framework for More Expressive Graph Neural Networks, Proc. International Conference on Machine Learning (ICML-23), 2023. (PDF)
    • [ICLR-23] Y. Huang, X. Peng, J. Ma and M. Zhang, Boosting the Cycle Counting Power of Graph Neural Networks with I2-GNNs, International Conference on Learning Representations (ICLR-23), 2023. (PDF) (Source code)
    • [ICLR-23] Z. Dong, W. Cao, M. Zhang, D. Tao, Y. Chen and X. Zhang, CktGNN: Circuit Graph Neural Network for Electronic Design Automation, International Conference on Learning Representations (ICLR-23), 2023. (PDF)
    • [ASP-DAC-23] Y. Chen, J. Mai, X. Gao, M. Zhang and Y. Lin, MacroRank: Ranking Macro Placement Solutions Leveraging Translation Equivariancy, 28th Asia and South Pacific Design Automation Conference (ASP-DAC-23), 2023. (PDF)
      2022

    • [LoG-22] X. Wang and M. Zhang, Graph Neural Network with Local Frame for Molecular Potential Energy Surface, The First Learning on Graphs Conference (LoG-22), 2022. (PDF)(Source code)
    • [LoG-22] Z. Wang, Q. Cao, H. Shen#, B. Xu, M. Zhang# and X. Cheng, Towards Efficient and Expressive GNNs for Graph Classification via Subgraph-aware Weisfeiler-Lehman, The First Learning on Graphs Conference (LoG-22), 2022. (#corresponding author)(PDF)
    • [NeurIPS-22] H. Yang, Z. Lin and M. Zhang, Rethinking Knowledge Graph Evaluation Under the Open-World Assumption, Advances in Neural Information Processing Systems (NeurIPS-22), oral presentation (1.7% acceptance rate), 2022. (PDF)(Source code)
    • [NeurIPS-22] J. Feng, Y. Chen, F. Li, A. Sarkar and M. Zhang, How Powerful are K-hop Message Passing Graph Neural Networks, Advances in Neural Information Processing Systems (NeurIPS-22), 2022. (PDF)(Source code)
    • [NeurIPS-22] L. Kong, Y. Chen and M. Zhang, Geodesic Graph Neural Network for Efficient Graph Representation Learning, Advances in Neural Information Processing Systems (NeurIPS-22), 2022. (PDF)(Source code)
    • [ICML-22-DyNN] Y. Yang, Y. Liang and M. Zhang, PA-GNN: Parameter-Adaptive Graph Neural Networks, Workshop on Dynamic Neural Networks in International Conference on Machine Learning (ICML-22-DyNN-Workshop), oral presentation, 2022. (PDF)
    • [VLDB-22] H. Yin, M. Zhang, Y. Wang, J. Wang and P. Li, Algorithm and System Co-design for Efficient Subgraph-based Graph Representation Learning, Proc. of the VLDB Endowment (VLDB-22), volume 15, 2022. (PDF)(Source code)
    • [ICML-22] Y. Huang, X. Peng, J. Ma, and M. Zhang, 3DLinker: An E(3) Equivariant Variational Autoencoder for Molecular Linker Design, Proc. International Conference on Machine Learning (ICML-22), long presentation, 2022. (Only 118 out of 5630 submissions are accepted as long presentations) (PDF)(Source code)
    • [ICML-22] X. Wang and M. Zhang, How Powerful are Spectral Graph Neural Networks, Proc. International Conference on Machine Learning (ICML-22), 2022. (PDF)(Source code)
    • [ICML-22] Z. Dong*, M. Zhang*, F. Li, and Y. Chen, PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs, Proc. International Conference on Machine Learning (ICML-22), 2022. (*co-first author) (PDF)(Source code)
    • [ICLR-22] X. Wang and M. Zhang, GLASS: GNN with Labeling Tricks for Subgraph Representation Learning, International Conference on Learning Representations (ICLR-22), 2022. (PDF)(Source code)
    • [ICLR-22] H. Wang, H. Yin, M. Zhang, and P. Li, Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks, International Conference on Learning Representations (ICLR-22), 2022. (PDF)(Source code)
      2021

    • [MIA-21] X. Li, Y. Zhou, N. Dvornek, M. Zhang, S. Gao, J. Zhuang, D. Scheinost, L. H. Staib, P. Ventola and J. S. Duncan, BrainGNN: Interpretable brain graph neural network for fMRI analysis, Medical Image Analysis (MIA-21), 2021. (PDF)
    • [NeurIPS-21] M. Zhang, P. Li, Y. Xia, K. Wang, and L. Jin, Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning, Advances in Neural Information Processing Systems (NeurIPS-21), 2021. (PDF)(Source code)
    • [NeurIPS-21] M. Zhang and P. Li, Nested Graph Neural Networks, Advances in Neural Information Processing Systems (NeurIPS-21), 2021. (PDF) (Source code)
    • [NeurIPS-21] H. Zeng, M. Zhang#, Y. Xia, A. Srivastava, A. Malevich, R. Kannan, V. Prasanna, L. Jin, and R. Chen, Decoupling the Depth and Scope of Graph Neural Networks, Advances in Neural Information Processing Systems (NeurIPS-21), 2021. (#corresponding author) (PDF)(Source code)
      2020

    • [ICLR-20] M. Zhang and Y. Chen, Inductive Matrix Completion Based on Graph Neural Networks, International Conference on Learning Representations (ICLR-20), spotlight presentation (4.16% acceptance rate), 2020. (PDF)(Source code)
    • [KDD-20] M. Zhang, C. King, M. Avidan, and Y. Chen, Hierarchical Attention Propagation for Healthcare Representation Learning, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-20), 2020. (PDF)(Source code)
    • [MICCAI-20] X. Li, Y. Zhou, N. Dvornek, M. Zhang, J. Zhuang, P. Ventola, and J. Duncan, Pooling Regularized Graph Neural Network for fMRI Biomarker Analysis, Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI-20), 2020. (PDF)
      2019

    • [NeurIPS-19] M. Zhang, S. Jiang, Z. Cui, R. Garnett, and Y. Chen, D-VAE: A Variational Autoencoder for Directed Acyclic Graphs, Advances in Neural Information Processing Systems (NeurIPS-19), 2019. (PDF)(Source code)
    • [BJA-19] B. Fritz, Z. Cui, M. Zhang, Y. He, Y. Chen, A. Kronzer, A. Ben Abdallah, C. King and M. Avidan, A Deep Learning Model for Predicting 30-day Postoperative Mortality, British Journal of Anaesthesia (BJA-19), 2019. (PDF)
      2018

    • [NeurIPS-18] M. Zhang and Y. Chen, Link Prediction Based on Graph Neural Networks, Advances in Neural Information Processing Systems (NeurIPS-18), spotlight presentation, 2018. (Only 168 out of 4856 submissions are accepted as spotlight presentations) (PDF)(Source code)(Website)(Video)
    • [AAAI-18] M. Zhang, Z. Cui, M. Neumann, and Y. Chen, An End-to-End Deep Learning Architecture for Graph Classification, Proc. AAAI Conference on Artificial Intelligence (AAAI-18), 2018. (PDF)(Supplement)(Source code)
    • [AAAI-18] M. Zhang, Z. Cui, S. Jiang, and Y. Chen, Beyond Link Prediction: Predicting Hyperlinks in Adjacency Space, Proc. AAAI Conference on Artificial Intelligence (AAAI-18), 2018. (PDF)(Source code)
    • [ICBK-18] Z. Cui, M. Zhang, and Y. Chen, Deep Embedding Logistic Regression, Proc. IEEE International Conference on Big Knowledge (ICBK-18), 2018. (PDF)
      2017

    • [KDD-17] M. Zhang and Y. Chen, Weisfeiler-Lehman Neural Machine for Link Prediction, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-17), oral presentation, 2017. (Only 64 out of 748 submissions are accepted as oral presentations) (PDF)(Video)(Slides)(Source code)
    • [Bioinformatics-17] T. Oyetunde*, M. Zhang*, Y. Chen, Y. Tang, and C. Lo, BoostGAPFILL: Improving the fidelity of metabolic network reconstructions through integrated constraint and pattern-based methods, Bioinformatics, 33(4):608-611, 2017. (*co-first author) (PDF)
      2016

    • [BMC Bioinformatics-16] L. He, S. Wu, M. Zhang, Y. Chen, and Y. Tang, WUFlux: an open-source platform for 13C metabolic flux analysis of bacterial metabolism, BMC Bioinformatics, 17.1 (2016): 444. (PDF)
    • [TNNLS-16] W. Cai, M. Zhang, and Y. Zhang, Batch Mode Active Learning for Regression With Expected Model Change, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), (2016): 1-14. (PDF)

      2015

    • [IRJ-15] W. Cai, M. Zhang, and Y. Zhang, Active learning for ranking with sample density, Information Retrieval Journal, 18.2 (2015): 123-144. (PDF)
    • [TWEB-15] W. Cai, M. Zhang, and Y. Zhang, Active learning for Web search ranking via noise injection, ACM Transactions on the Web (TWEB), 9.1 (2015): 3. (PDF)

    Group

    Haotong Yang, PhD Student, PKU 2021 (co-advised with Zhouchen Lin)

    Xiyuan Wang, PhD Student, PKU 2022

    Fanxu Meng, PhD Student, PKU 2022

    Xiaojuan Tang, PhD Student, PKU 2022 (co-advised with Song-Chun Zhu)

    Zian Li, PhD Student, PKU 2023

    Yanbo Wang, PhD Student, PKU 2023

    Lecheng Kong, PhD Student, WashU

    Jiarui Feng, PhD Student, WashU

    Zehao Dong, PhD Student, WashU

    Hao Liu, PhD Student, WashU

    Xingang Peng, PhD Student, PKU

    Zuoyu Yan, PhD Student, PKU

    Junru Zhou, Undergraduate, PKU

    Yuran Xiang, Undergraduate, PKU

    Cai Zhou, Undergraduate, THU

    Yi Hu, Undergraduate, PKU

    Alumni

    Yinan Huang, Research Intern, BIGAI

    Yang Hu, Research Intern, PKU

    Yuxin Yang, Research Intern, BIGAI

    Zhaohui Wang, PhD Student, ICT-CAS


    Software

    IGMC (Inductive Graph-based Matrix Completion)

    Code for paper "Inductive Matrix Completion Based on Graph Neural Networks"

    D-VAE (DAG Variational Autoencoder)

    Code for paper "D-VAE: A Variational Autoencoder for Directed Acyclic Graphs" on NeurIPS 2019

    SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction)

    Code for paper "Link Prediction Based on Graph Neural Networks" on NeurIPS 2018

    DGCNN (Deep-Graph-CNN)

    Code for paper "An End-to-End Deep Learning Architecture for Graph Classification" on AAAI 2018

    Hyperlink Prediction Toolbox

    Code for paper "Beyond Link Prediction: Predicting Hyperlinks in Adjacency Space" on AAAI 2018

    WLNM (Weisfeiler-Lehman Neural Machine)

    Code for paper "Weisfeiler-Lehman Neural Machine for Link Prediction" on KDD 2017