Myoungseob Mun
Associate Research Fellow, Korea Institute of Intellectual Property, Republic of Korea
Correspondence to Myoungseob Mun, E-mail: mms1019@kiip.re.kr
Volume 20, Number 4, Pages 131-146, December 2025.
Journal of Intellectual Property 2025;20(4):131-146. https://doi.org/10.34122/jip.2025.20.4.131
Received on August 14, 2025, Revised on August 30, 2025, Accepted on December 03, 2025, Published on December 30, 2025.
Copyright © 2025 Korea Institute of Intellectual Property.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives (https://creativecommons.org/licenses/by-nc-nd/4.0/) which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
The emergence of DeepSeek, an innovative low-cost, high-efficiency AI model, is having a significant impact on the global AI industry. By employing techniques such as Mixture of Experts, FP8 precision, and knowledge distillation, DeepSeek reduced hardware costs and development time while demonstrating performance comparable to that of leading models. Following DeepSeek’s success, interest in knowledge distillation as a means of developing cost-efficient AI models has surged. Knowledge distillation is a technique that transfers knowledge from a large “teacher” model to a smaller “student” model, enabling model compression and faster inference. However, this process has intensified intellectual property disputes. Leading companies such as OpenAI and Google have accused DeepSeek of intellectual property theft, claiming it was trained on their models without authorization, and have raised the possibility of legal action. Previously, IP issues in AI development centered on copyright infringement during the training process; now, as fast-followers apply knowledge distillation to the models of leading developers, the issue is expanding to data misappropriation. As knowledge distillation becomes a crucial tool for latecomers seeking a competitive edge, it also necessitates a review of whether such practices constitute IP infringement. Against the backdrop of intensifying conflict among global AI developers over knowledge distillation, this paper examines the allegations of DeepSeek’s data misappropriation from an intellectual property law perspective.
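To make the teacher-to-student transfer concrete, the following is a minimal sketch of the standard temperature-based distillation loss (the formulation popularized by Hinton et al.), not DeepSeek's actual training code. The logits and the temperature value are purely illustrative.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: a higher T produces "softer"
    # probability distributions that expose the teacher's dark knowledge.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between the temperature-softened teacher and student
    # distributions, scaled by T^2 so gradients stay comparable across T.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student soft predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# Hypothetical logits for a single 3-class example.
teacher = [8.0, 2.0, -1.0]
student = [5.0, 1.0, 0.0]
loss = distillation_loss(teacher, student)
```

In practice the student minimizes a weighted sum of this soft-target loss and the ordinary cross-entropy on ground-truth labels; the soft targets are what transfer the teacher's learned inter-class similarities, which is precisely the "knowledge" at issue in the disputes discussed above.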
Knowledge Distillation, Deep Learning, Model Compression, Copyright Act, Acts of Unfair Competition
No potential conflict of interest relevant to this article was reported.
The author received manuscript fees for this article from Korea Institute of Intellectual Property.