I am an associate distinguished researcher at NTT. My research interests are machine learning with synthetic data, generative models, vision-language models, explainability, distribution shifts, self-supervised learning, and semi-supervised learning.

Updates

  • [2025/11/10] Our paper Distribution Highlighted Reference-based Label Distribution Learning for Facial Age Estimation has been accepted to WACV 2026!
  • [2025/11/09] Our paper Difference Vector Equalization for Robust Fine-tuning of Vision-Language Models has been accepted to AAAI 2026 (acceptance rate: 17.6%)! We propose a robust fine-tuning method for CLIP-like models that preserves geometric structures in feature spaces to maintain zero-shot performance.
  • [2025/09/24] My doctoral dissertation, Dataset Synthesis with Deep Generative Models, has been accepted, and I received a PhD in Informatics!
  • [2025/04/01] Our paper Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models has been accepted to IJCNN 2025!
  • [2025/02/26] Our paper Post-pre-training for Modality Alignment in Vision-Language Foundation Models has been accepted to CVPR 2025! We propose a very lightweight post-pre-training method for aligning pre-trained vision-language models like CLIP.
  • [2025/01/22] Our paper Test-time Adaptation for Regression by Subspace Alignment has been accepted to ICLR 2025! This paper proposes a novel test-time adaptation method for regression tasks that aligns features on a subspace.
  • [2025/01/21] Our paper Transfer Learning with Pre-trained Conditional Generative Models has been accepted to Machine Learning Journal (ECML-PKDD Journal Track)! We propose a generative transfer learning method based on large pre-trained generative models for a challenging setting where neither source datasets nor pre-trained weights are accessible for target tasks.
  • [2024/12/10] Our paper Explanation Bottleneck Models has been accepted to AAAI 2025 (acceptance rate: 23%)! We propose a new interpretable model that generates text explanations and predicts the final label based on those explanations. This work will also be presented at the Workshop on Foundation Model Interventions (MINT) @ NeurIPS 2024!
Past Updates

Activities

Services as a Reviewer

  • 2022: ICML, NeurIPS
  • 2023: CVPR, PAKDD, ICML, ICCV, NeurIPS, IPSJ, DMLR@ICML2023, BMVC, ACML, TNNLS
  • 2024: WACV, ICLR, CVPR, DMLR@ICLR2024, ICML, ECCV, NeurIPS, NeurIPS DB Track, ACML, DMLR@ICML2024, TMLR
  • 2025: AAAI, ICLR, AISTATS, CVPR, ICML, TMLR, ICCV, NeurIPS, Pattern Recognition
  • 2026: WACV, AAAI, ICLR

Biography

Apr. 2017 - Present

Researcher at NTT

Apr. 2022 - Sep. 2025

Ph.D. in Informatics from Dept. of Intelligence Science & Technology, Graduate School of Informatics, Kyoto University (Supervisor: Hisashi Kashima)

Apr. 2015 - Mar. 2017

M.E. from Dept. of Computer Engineering, Graduate School of Engineering, Yokohama National University (Supervisor: Kimio Kuramitsu)

Apr. 2011 - Mar. 2015

B.E. from Dept. of Computer Engineering, Yokohama National University (Supervisor: Kimio Kuramitsu)


Publications

International Conference

  1. S. Suzuki, S. Yamaguchi, S. Takeda, T. Yamane, N. Makishima, N. Kawata, M. Ihori, T. Tanaka, S. Orihashi, R. Masumura,
    Distribution Highlighted Reference-based Label Distribution Learning for Facial Age Estimation,
    The IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026.
  2. S. Suzuki, S. Yamaguchi, S. Takeda, T. Yamane, N. Makishima, N. Kawata, M. Ihori, T. Tanaka, S. Orihashi, R. Masumura,
    Difference Vector Equalization for Robust Fine-tuning of Vision-Language Models,
    AAAI Conference on Artificial Intelligence (AAAI), 2026.
  3. S. Kanai, Y. Ida, K. Adachi, M. Uchida, T. Yoshida, S. Yamaguchi,
    Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models,
    International Joint Conference on Neural Networks (IJCNN), 2025.
  4. S. Yamaguchi, D. Feng, S. Kanai, K. Adachi, D. Chijiwa,
    Post-pre-training for Modality Alignment in Vision-Language Foundation Models,
    The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. [Code]
  5. K. Adachi, S. Yamaguchi, A. Kumagai,
    Test-time Adaptation for Regression by Subspace Alignment,
    International Conference on Learning Representations (ICLR), 2025. [OpenReview]
  6. S. Yamaguchi and K. Nishida,
    Explanation Bottleneck Models,
    AAAI Conference on Artificial Intelligence (AAAI), Oral, 2025. [Code]
  7. S. Yamaguchi,
    Analyzing Diffusion Models on Synthesizing Training Datasets,
    Asian Conference on Machine Learning (ACML), 2024. [OpenReview]
  8. M. Yamada, T. Yamashita, S. Yamaguchi, D. Chijiwa,
    Toward Data Efficient Model Merging between Different Datasets without Performance Degradation,
    Asian Conference on Machine Learning (ACML), 2024. [arXiv] [OpenReview]
  9. K. Adachi, S. Enomoto, T. Sasaki, S. Yamaguchi,
    Test-Time Similarity Modification for Person Re-Identification Toward Temporal Distribution Shift,
    International Joint Conference on Neural Networks (IJCNN), 2024.
  10. S. Enomoto, N. Hasegawa, K. Adachi, T. Sasaki, S. Yamaguchi, S. Suzuki, T. Eda,
    Test-Time Adaptation Meets Image Enhancement: Improving Accuracy via Uncertainty-Aware Logit Switching,
    International Joint Conference on Neural Networks (IJCNN), 2024.
  11. S. Yamaguchi, S. Kanai, K. Adachi, D. Chijiwa,
    Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks,
    The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. [Code] [arXiv]
  12. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Regularizing Neural Networks with Meta-Learning Generative Models,
    Neural Information Processing Systems (NeurIPS), 2023.
  13. S. Yamaguchi,
    Generative Semi-supervised Learning with Meta-Optimized Synthetic Samples,
    Asian Conference on Machine Learning (ACML), Best Paper Award, 2023.
  14. S. Suzuki, S. Yamaguchi, S. Takeda, S. Kanai, N. Makishima, A. Ando, R. Masumura,
    Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff,
    The IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
  15. K. Adachi, S. Yamaguchi, A. Kumagai,
    Covariance-aware Feature Alignment with Pre-computed Source Statistics for Test-time Adaptation,
    IEEE International Conference on Image Processing (ICIP), 2023.
  16. S. Kanai, S. Yamaguchi, M. Yamada, H. Takahashi, Y. Ida,
    Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness,
    International Conference on Machine Learning (ICML), 2023.
  17. D. Chijiwa, S. Yamaguchi, A. Kumagai, Y. Ida,
    Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks,
    Neural Information Processing Systems (NeurIPS), 2022.
  18. K. Adachi, S. Yamaguchi,
    Learning Robust Convolutional Neural Networks with Relevant Feature Focusing via Explanations,
    IEEE International Conference on Multimedia & Expo (ICME), 2022.
  19. D. Chijiwa, S. Yamaguchi, Y. Ida, K. Umakoshi, T. Inoue,
    Pruning Randomly Initialized Neural Networks with Iterative Randomization,
    Neural Information Processing Systems (NeurIPS, Spotlight), 2021. [arXiv] [Code]
  20. S. Yamaguchi, S. Kanai,
    F-Drop&Match: GANs with a Dead Zone in the High-Frequency Domain,
    International Conference on Computer Vision (ICCV), 2021. [arXiv]
  21. S. Yamaguchi, S. Kanai, T. Shioda, S. Takeda,
    Image Enhanced Rotation Prediction for Self-Supervised Learning,
    IEEE International Conference on Image Processing (ICIP), 2021. [arXiv]
  22. S. Kanai, M. Yamada, S. Yamaguchi, H. Takahashi, Y. Ida,
    Constraining Logits by Bounded Function for Adversarial Robustness,
    International Joint Conference on Neural Networks (IJCNN), 2021. [arXiv]
  23. S. Yamaguchi, S. Kanai, T. Eda,
    Effective Data Augmentation with Multi-Domain Learning GANs,
    AAAI Conference on Artificial Intelligence (AAAI), 2020. [arXiv]
  24. S. Yamaguchi, K. Kuramitsu,
    A Fusion Technique of Schema and Syntax Rules for Validating Open Data,
    Asian Conference on Intelligent Information and Database Systems (ACIIDS), 2017.

International Journal

  1. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Transfer Learning with Pre-trained Conditional Generative Models,
    Machine Learning Journal (ECML-PKDD Journal Track), 2025.
  2. T. Sasaki, A. S. Walmsley, S. Enomoto, K. Adachi, S. Yamaguchi,
    Key Factors Determining the Required Number of Training Images in Person Re-Identification,
    IEEE Access.

International Workshop (Refereed)

  1. S. Yamaguchi and K. Nishida,
    Toward Explanation Bottleneck Models,
    NeurIPS Workshop on Foundation Model Interventions (MINT), 2024.
  2. K. Adachi, S. Yamaguchi, A. Kumagai,
    Test-time Adaptation for Regression by Subspace Alignment,
    The 1st Workshop on Test-Time Adaptation at CVPR 2024. Special Mention.
  3. S. Yamaguchi,
    Analyzing Diffusion Models on Synthesizing Training Datasets,
    Data-centric Machine Learning Workshop at ICLR 2024.
  4. S. Yamaguchi and T. Fukuda,
    On the Limitation of Diffusion Models for Synthesizing Training Datasets,
    SyntheticData4ML Workshop at NeurIPS 2023.
  5. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Regularizing Neural Networks with Meta-Learning Generative Models,
    Data-centric Machine Learning Research (DMLR) Workshop at ICML 2023.

Preprints

  1. S. Yamaguchi, K. Nishida, D. Chijiwa,
    Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought,
    arXiv, 2025.
  2. K. Adachi, S. Yamaguchi, T. Hamagami,
    Uniformity First: Uniformity-aware Test-time Adaptation of Vision-language Models against Image Corruption,
    arXiv, 2025. [Code]
  3. S. Yamaguchi, S. Kanai, T. Shioda, S. Takeda,
    Multiple pretext-task for self-supervised learning via mixing multiple image transformations,
    arXiv, 2019.
  4. K. Kuramitsu, S. Yamaguchi,
    XML Schema Validation using Parsing Expression Grammars,
    PeerJ PrePrints, 2015.

Honors

  • Outstanding Reviewer: ICML 2022, NeurIPS 2024 Main Track, NeurIPS 2024 Dataset & Benchmark Track
  • FY2022 PRMU Research Encouragement Award (PRMU研究奨励賞; outstanding research award at a Japanese domestic conference)
  • ACML 2023 Best Paper Award