I am an Associate Distinguished Researcher at NTT. My research interests include machine learning with synthetic data, generative models, vision-language models, explainability, distribution shifts, self-supervised learning, and semi-supervised learning.

Activities

Services as a Reviewer

  • 2022: ICML, NeurIPS
  • 2023: CVPR, PAKDD, ICML, ICCV, NeurIPS, IPSJ, DMLR@ICML2023, BMVC, ACML, TNNLS
  • 2024: WACV, ICLR, CVPR, DMLR@ICLR2024, ICML, ECCV, NeurIPS, NeurIPS DB Track, ACML, DMLR@ICML2024, TMLR
  • 2025: AAAI, ICLR, AISTATS, CVPR, ICML, TMLR, ICCV, NeurIPS, Pattern Recognition
  • 2026: WACV, AAAI, ICLR, CVPR, ICME, ICML

Biography

Apr. 2017 - Current

Researcher at NTT

Apr. 2022 - Sep. 2025

Ph.D. in Informatics from Dept. of Intelligence Science & Technology, Graduate School of Informatics, Kyoto University (Supervisor: Hisashi Kashima)

Apr. 2015 - Mar. 2017

M.E. from Dept. of Computer Engineering, Graduate School of Engineering, Yokohama National University (Supervisor: Kimio Kuramitsu)

Apr. 2011 - Mar. 2015

B.E. from Dept. of Computer Engineering, Yokohama National University (Supervisor: Kimio Kuramitsu)


Publications

International Conference

  1. D. Chijiwa, T. Hasegawa, K. Nishida, S. Yamaguchi, T. Ohba, T. Sakao, S. Takeuchi,
    Lossless Vocabulary Reduction for Auto-Regressive Language Models,
    International Conference on Learning Representations (ICLR), 2026.
  2. S. Suzuki, S. Yamaguchi, S. Takeda, T. Yamane, N. Makishima, N. Kawata, M. Ihori, T. Tanaka, S. Orihashi, R. Masumura,
    Distribution Highlighted Reference-based Label Distribution Learning for Facial Age Estimation,
    The IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026.
  3. S. Suzuki, S. Yamaguchi, S. Takeda, T. Yamane, N. Makishima, N. Kawata, M. Ihori, T. Tanaka, S. Orihashi, R. Masumura,
    Difference Vector Equalization for Robust Fine-tuning of Vision-Language Models,
    AAAI Conference on Artificial Intelligence (AAAI), 2026.
  4. S. Kanai, Y. Ida, K. Adachi, M. Uchida, T. Yoshida, S. Yamaguchi,
    Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models,
    International Joint Conference on Neural Networks (IJCNN), 2025.
  5. S. Yamaguchi, D. Feng, S. Kanai, K. Adachi, D. Chijiwa,
    Post-pre-training for Modality Alignment in Vision-Language Foundation Models,
    The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. [Code]
  6. K. Adachi, S. Yamaguchi, A. Kumagai,
    Test-time Adaptation for Regression by Subspace Alignment,
    International Conference on Learning Representations (ICLR), 2025. [OpenReview]
  7. S. Yamaguchi and K. Nishida,
    Explanation Bottleneck Models,
    AAAI Conference on Artificial Intelligence (AAAI), Oral, 2025. [Code]
  8. S. Yamaguchi,
    Analyzing Diffusion Models on Synthesizing Training Datasets,
    Asian Conference on Machine Learning (ACML), 2024. [OpenReview]
  9. M. Yamada, T. Yamashita, S. Yamaguchi, D. Chijiwa,
    Toward Data Efficient Model Merging between Different Datasets without Performance Degradation,
    Asian Conference on Machine Learning (ACML), 2024. [arXiv] [OpenReview]
  10. K. Adachi, S. Enomoto, T. Sasaki, S. Yamaguchi,
    Test-Time Similarity Modification for Person Re-Identification Toward Temporal Distribution Shift,
    International Joint Conference on Neural Networks (IJCNN), 2024.
  11. S. Enomoto, N. Hasegawa, K. Adachi, T. Sasaki, S. Yamaguchi, S. Suzuki, T. Eda,
    Test-Time Adaptation Meets Image Enhancement: Improving Accuracy via Uncertainty-Aware Logit Switching,
    International Joint Conference on Neural Networks (IJCNN), 2024.
  12. S. Yamaguchi, S. Kanai, K. Adachi, D. Chijiwa,
    Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks,
    The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. [Code] [arXiv]
  13. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Regularizing Neural Networks with Meta-Learning Generative Models,
    Neural Information Processing Systems (NeurIPS), 2023.
  14. S. Yamaguchi,
    Generative Semi-supervised Learning with Meta-Optimized Synthetic Samples,
    Asian Conference on Machine Learning (ACML), Best Paper Award, 2023.
  15. S. Suzuki, S. Yamaguchi, S. Takeda, S. Kanai, N. Makishima, A. Ando, R. Masumura,
    Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff,
    The IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
  16. K. Adachi, S. Yamaguchi, A. Kumagai,
    Covariance-aware Feature Alignment with Pre-computed Source Statistics for Test-time Adaptation,
    IEEE International Conference on Image Processing (ICIP), 2023.
  17. S. Kanai, S. Yamaguchi, M. Yamada, H. Takahashi, Y. Ida,
    Switching One-Versus-the-Rest Loss to Increase the Margin of Logits for Adversarial Robustness,
    International Conference on Machine Learning (ICML), 2023.
  18. D. Chijiwa, S. Yamaguchi, A. Kumagai, Y. Ida,
    Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks,
    Neural Information Processing Systems (NeurIPS), 2022.
  19. K. Adachi, S. Yamaguchi,
    Learning Robust Convolutional Neural Networks with Relevant Feature Focusing via Explanations,
    IEEE International Conference on Multimedia & Expo (ICME), 2022.
  20. D. Chijiwa, S. Yamaguchi, Y. Ida, K. Umakoshi, T. Inoue,
    Pruning Randomly Initialized Neural Networks with Iterative Randomization,
    Neural Information Processing Systems (NeurIPS), Spotlight, 2021. [arXiv] [Code]
  21. S. Yamaguchi, S. Kanai,
    F-Drop&Match: GANs with a Dead Zone in the High-Frequency Domain,
    The IEEE/CVF International Conference on Computer Vision (ICCV), 2021. [arXiv]
  22. S. Yamaguchi, S. Kanai, T. Shioda, S. Takeda,
    Image Enhanced Rotation Prediction for Self-Supervised Learning,
    IEEE International Conference on Image Processing (ICIP), 2021. [arXiv]
  23. S. Kanai, M. Yamada, S. Yamaguchi, H. Takahashi, Y. Ida,
    Constraining Logits by Bounded Function for Adversarial Robustness,
    International Joint Conference on Neural Networks (IJCNN), 2021. [arXiv]
  24. S. Yamaguchi, S. Kanai, T. Eda,
    Effective Data Augmentation with Multi-Domain Learning GANs,
    AAAI Conference on Artificial Intelligence (AAAI), 2020. [arXiv]
  25. S. Yamaguchi, K. Kuramitsu,
    A Fusion Techniques of Schema and Syntax Rules for Validating Open Data,
    Asian Conference on Intelligent Information and Database Systems (ACIIDS), 2017.

International Journal

  1. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Transfer Learning with Pre-trained Conditional Generative Models,
    Machine Learning Journal (ECML-PKDD Journal Track), 2025.
  2. T. Sasaki, A. S. Walmsley, S. Enomoto, K. Adachi, S. Yamaguchi,
    Key Factors Determining the Required Number of Training Images in Person Re-Identification,
    IEEE Access.

International Workshop (Refereed)

  1. S. Yamaguchi and K. Nishida,
    Toward Explanation Bottleneck Models,
    NeurIPS Workshop on Foundation Model Interventions (MINT), 2024.
  2. K. Adachi, S. Yamaguchi, A. Kumagai,
    Test-time Adaptation for Regression by Subspace Alignment,
    The 1st Workshop on Test-Time Adaptation at CVPR 2024, Special Mention.
  3. S. Yamaguchi,
    Analyzing Diffusion Models on Synthesizing Training Datasets,
    Data-centric Machine Learning Workshop at ICLR 2024.
  4. S. Yamaguchi and T. Fukuda,
    On the Limitation of Diffusion Models for Synthesizing Training Datasets,
    SyntheticData4ML Workshop at NeurIPS 2023.
  5. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima,
    Regularizing Neural Networks with Meta-Learning Generative Models,
    Data-centric Machine Learning Research (DMLR) Workshop at ICML 2023.

Preprints

  1. S. Enomoto, S. Yamaguchi,
    MultiModal Fine-tuning with Synthetic Captions,
    arXiv, 2025. [Code]
  2. S. Yamaguchi, K. Nishida, D. Chijiwa,
    Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought,
    arXiv, 2025.
  3. K. Adachi, S. Yamaguchi, T. Hamagami,
    Uniformity First: Uniformity-aware Test-time Adaptation of Vision-language Models against Image Corruption,
    arXiv, 2025. [Code]
  4. S. Yamaguchi, S. Kanai, T. Shioda, S. Takeda,
    Multiple pretext-task for self-supervised learning via mixing multiple image transformations,
    arXiv, 2019.
  5. K. Kuramitsu, S. Yamaguchi,
    XML Schema Validation using Parsing Expression Grammars,
    PeerJ PrePrints, 2015.

Honors

  • Outstanding Reviewer: ICML 2022, NeurIPS 2024 Main Track, NeurIPS 2024 Datasets & Benchmarks Track
  • FY2022 (令和四年度) PRMU研究奨励賞 (PRMU Research Encouragement Award, an outstanding research award at a Japanese domestic conference)
  • ACML 2023 Best Paper Award