Proceedings of the 5th Management Science Informatization and Economic Innovation Development Conference, MSIEID 2023, December 8–10, 2023, Guangzhou, China

Research Article

Assessing Security Risks in ChatGPT for Academic Writing Scenarios: A Study on Knowledge Dissemination Based on Large-scale Language Models

  • @INPROCEEDINGS{10.4108/eai.8-12-2023.2344354,
        author={Zhuo Luo},
        title={Assessing Security Risks in ChatGPT for Academic Writing Scenarios: A Study on Knowledge Dissemination Based on Large-scale Language Models},
        proceedings={Proceedings of the 5th Management Science Informatization and Economic Innovation Development Conference, MSIEID 2023, December 8--10, 2023, Guangzhou, China},
        publisher={EAI},
        proceedings_a={MSIEID},
        year={2024},
        month={4},
        keywords={artificial intelligence, academic writing, security risks, large-scale language models, knowledge dissemination},
        doi={10.4108/eai.8-12-2023.2344354}
    }
Zhuo Luo1,*
  • 1: Guangzhou Xinhua University
*Contact email: salz@xhsysu.edu.cn

Abstract

In the digital age, artificial intelligence technologies have become ubiquitous. Using the fuzzy comprehensive evaluation method, this study examines the security implications of using ChatGPT in academic writing environments and explores the ethical concerns surrounding its deployment as a major language model for knowledge dissemination. The results suggest that, while ChatGPT poses minimal risk in academic settings, certain vulnerabilities, notably in the realm of intellectual property, underscore the need for robust protective measures. This study identifies pivotal factors influencing ChatGPT's safety in academic writing, including data protection, software copyright, network communication standards, and model inference risks. Notably, we underscore the paramount importance of transparency in data processing as a safeguard for ensuring safety. We also advocate for meticulous scrutiny of AI-generated outputs to validate their veracity and coherence. In contexts where AI aids in data interpretation or prediction, hands-on verification and comprehensive review are indispensable to uphold both ethical and safety standards.
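The fuzzy comprehensive evaluation method named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: the evaluation grades, weight vector, and membership degrees below are invented for demonstration; only the four criterion names come from the abstract.

```python
import numpy as np

# Criteria named in the abstract as factors influencing ChatGPT's safety
# in academic writing. The numbers below are illustrative assumptions.
criteria = ["data protection", "software copyright",
            "network communication standards", "model inference risks"]

# Weight vector W: assumed relative importance of each criterion (sums to 1).
W = np.array([0.3, 0.3, 0.2, 0.2])

# Membership matrix R: each row gives one criterion's membership degrees
# over the evaluation grades (low, medium, high risk); rows sum to 1.
R = np.array([
    [0.6, 0.3, 0.1],   # data protection
    [0.3, 0.4, 0.3],   # software copyright
    [0.7, 0.2, 0.1],   # network communication standards
    [0.5, 0.3, 0.2],   # model inference risks
])

# Composite evaluation B = W · R, using the weighted-average
# composition operator M(·, +), then normalized.
B = W @ R
B = B / B.sum()

grades = ["low", "medium", "high"]
# By the maximum-membership principle, the grade with the largest
# composite membership is taken as the overall assessment.
overall = grades[int(np.argmax(B))]
print(dict(zip(grades, np.round(B, 3))), "->", overall)
```

With these illustrative inputs the composite vector is (0.51, 0.31, 0.18), so the overall assessment is "low" risk, consistent in spirit with the abstract's finding of minimal risk alongside specific intellectual-property vulnerabilities.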