
OPTML Group

Welcome to the OPTML Group's GitHub Repository!

About Us

The OPtimization and Trustworthy Machine Learning (OPTML) group (Group Website) is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, the study of robust and scalable AI can have a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current algorithms become infeasible, and we aim to formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!

Pinned

  1. Unlearn-Saliency Public

    [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu (a minimal sketch of the weight-saliency idea follows this list)

    Python · 122 stars · 24 forks

  2. UnlearnCanvas Public

    [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng …

    Python · 67 stars · 2 forks

  3. Diffusion-MU-Attack Public

    The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method to evaluate the harmful-content generation ability of safety-driven unlearned diffusion models.

    Python · 76 stars · 3 forks

  4. AdvUnlearn Public

    Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enh…

    Jupyter Notebook · 41 stars · 1 fork

  5. Unlearn-Sparse Public

    [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

    Python · 69 stars · 10 forks

  6. Unlearn-Simple Public

    "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu

    Python · 26 stars · 7 forks
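
For readers unfamiliar with the weight-saliency idea behind SalUn, the sketch below illustrates it in generic PyTorch: score each weight by the magnitude of the forgetting-loss gradient, keep a binary mask of the most salient weights, and restrict unlearning updates to those weights. Everything here (the function names, the threshold, the loaders) is hypothetical and is not the Unlearn-Saliency repository's actual API; it is only a minimal sketch of the general recipe.

```python
# Minimal, illustrative sketch of gradient-based weight saliency for unlearning,
# in the spirit of SalUn. All names (saliency_mask, masked_unlearning_step) and
# the threshold value are hypothetical, not part of the Unlearn-Saliency codebase.
import torch


def saliency_mask(model, forget_loader, loss_fn, threshold=1e-3):
    """Mark weights whose forgetting-loss gradient magnitude exceeds a threshold."""
    model.zero_grad()
    for x, y in forget_loader:
        loss_fn(model(x), y).backward()  # accumulate gradients over the forget set
    return {
        name: (p.grad.abs() >= threshold).float()
        for name, p in model.named_parameters()
        if p.grad is not None
    }


def masked_unlearning_step(model, mask, optimizer, batch, unlearn_loss_fn):
    """One unlearning update restricted to the salient weights."""
    x, y = batch
    optimizer.zero_grad()
    unlearn_loss_fn(model(x), y).backward()
    for name, p in model.named_parameters():
        if p.grad is not None and name in mask:
            p.grad.mul_(mask[name])  # zero out gradients of non-salient weights
    optimizer.step()
```

In practice, the masked update is paired with an unlearning objective of choice (for image classification, the SalUn paper pairs it with a random-labeling objective on the forget set); see the repository above for the actual implementation and options.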

Repositories

Showing 10 of 30 repositories
  • OPTML-Group.github.io Public
    SCSS · 1 star · 2 forks · 0 open issues · 0 open pull requests · Updated May 31, 2025
  • Unlearn-ILU Public
    Python · 1 star · MIT license · 0 forks · 0 open issues · 0 open pull requests · Updated May 27, 2025
  • Unlearn-Saliency Public

    [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu

    Python · 122 stars · MIT license · 24 forks · 4 open issues · 0 open pull requests · Updated May 27, 2025
  • Unlearn-Simple Public

    "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu

    Python · 26 stars · MIT license · 7 forks · 1 open issue · 0 open pull requests · Updated May 27, 2025
  • Unlearn-WorstCase Public

    [ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu

    Python · 21 stars · MIT license · 2 forks · 1 open issue · 0 open pull requests · Updated May 27, 2025
  • Unlearn-Smooth Public

    [ICML25] Official repo for "Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond"

    Python · 7 stars · MIT license · 0 forks · 1 open issue · 0 open pull requests · Updated May 27, 2025
  • VLM-Safety-MU Public
    Python · 3 stars · MIT license · 0 forks · 0 open issues · 0 open pull requests · Updated Apr 29, 2025
  • MU-Coreset Public
    Python · 0 stars · 0 forks · 0 open issues · 0 open pull requests · Updated Apr 22, 2025
  • Diffusion-MU-Attack Public

    The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method to evaluate the harmful-content generation ability of safety-driven unlearned diffusion models.

    Python · 76 stars · MIT license · 3 forks · 1 open issue · 0 open pull requests · Updated Feb 28, 2025
  • WAGLE Public

    Official repo for NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models"

    Python · 14 stars · MIT license · 3 forks · 1 open issue · 0 open pull requests · Updated Dec 16, 2024
