2024 IEKTopics|Enhancing Risk Management in the Era of AI: Developing Trustworthy AI and Cybersecurity Regulations

Preface

The rapid development of AI has brought tremendous opportunities to industry and has become a key technology for businesses pursuing digital transformation. This innovation, however, is accompanied by a series of worrying risks involving ethics, information security, and AI trustworthiness. In particular, the recent rise of generative AI and large language models has produced astonishing applications, but the escalating scale and complexity of these models make it difficult to fully control how AI operates and what it generates. This constitutes a new security threat, because the same technologies can be exploited by threat actors.

The World Economic Forum (WEF), in The Global Risks Report 2024, has warned that AI-generated misinformation and disinformation will overtake extreme weather as the biggest threat facing the world over the next two years. This warning is not only a forecast of the future but also an accurate reflection of the current state of society.

Awareness of AI Risks from Micro to Mega Scale

In our daily lives, the risks brought by AI are everywhere. From fake messages circulating on social media platforms to fictitious cases generated by AI in the legal profession, AI-generated misinformation is affecting our lives at unprecedented speed and scale. It takes the form of false messages, fraudulent content, fake images, and even deepfake videos. Such content has become so sophisticated, and the Internet so pervasive, that people increasingly struggle to tell what is real from what is not.

The corporate sector is not immune to this AI storm either. Hackers have used AI to generate new ransomware, increasing the frequency of intrusions and expanding the scope of victimization, and exposing enterprises to serious data-leakage risks. The AI system itself also faces multiple threats, such as data poisoning, malicious modification, model theft, and adversarial attacks. This forces enterprises to strengthen the security of their AI systems at every layer, from data to models.
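
To make one of these threats concrete, the sketch below shows the fast gradient sign method (FGSM), a well-known adversarial attack that nudges an input just enough to change a classifier's prediction. The model, image, and label are hypothetical placeholders, not taken from any system discussed in this article.

```python
# A minimal FGSM sketch; `model`, `image`, and `label` are hypothetical.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Perturb `image` just enough to push the model toward a wrong answer."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp back
    # to the valid pixel range so the change stays small.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with any image classifier:
# adv = fgsm_attack(classifier, img.unsqueeze(0), torch.tensor([true_label]))
```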

 

What is even more worrying is that these problems have risen to the level of national security. A society inundated with false information over time, akin to the “gaslighting effect” described in psychology, can be subject to public manipulation, which in turn affects election outcomes and even raises the risk of geopolitical and social instability. When the digital environment is flooded with information whose authenticity is difficult to discern, the credibility of information erodes and society’s trust in the digital world as a whole is shaken.

AI Crime Becomes the New Normal, Prompting International Tech Companies to Proactively Fight Fraud

In the face of these challenges, international tech giants have begun to take action. Take OpenAI, for example: ahead of the many elections held around the world in 2024, it launched a series of measures against possible AI-enabled fraud. These include usage restrictions that prohibit its generative AI tools from being used for campaigning or lobbying; “red team testing,” which simulates cybercriminal attacks in order to develop stronger security measures; and the adoption of open standards for digital content credentials, which let users trace the origin and modification history of images. In addition, ChatGPT-generated text can be accompanied by sources and links as a “credibility label” for verification, and the DALL-E model uses a “provenance classifier” (similar to watermarking) to help users identify the images it generates.
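
To illustrate the idea behind digital content credentials, the sketch below binds an image's hash and edit history into a signed record that a viewer can later verify. It is a deliberately simplified, hypothetical stand-in for open standards such as C2PA, which use full public-key certificate chains, and is not a description of OpenAI's actual implementation.

```python
# A hypothetical, simplified content credential. Real standards use PKI
# certificate chains rather than a shared HMAC key.
import hashlib
import hmac
import json

SECRET_KEY = b"issuer-signing-key"  # placeholder; assume a real key service

def issue_credential(image_bytes: bytes, history: list[str]) -> dict:
    """Sign the image's hash together with its recorded edit history."""
    payload = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "history": history,  # e.g. ["generated by model X", "cropped"]
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Recompute hash and signature; any tampering breaks the match."""
    claimed = {k: v for k, v in credential.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed["sha256"]:
        return False
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])
```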

 

In parallel, Microsoft has developed its Responsible AI Standard and related assessment tools, giving product development teams six guiding principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. It has also launched Copilot for Security, a comprehensive end-to-end security solution that integrates a number of its security products and helps users minimize harmful or inaccurate output when developing models.

Taiwan’s TAIDE Program Improves Local AI Development Environment

In Taiwan, the government also recognizes the importance and potential impact of AI on national security. The Trustworthy AI Dialogue Engine (TAIDE) project was launched to create Taiwan’s own trustworthy generative AI base model for dialogue. The project brings together the National Applied Research Laboratories, the Ministry of Digital Affairs, Academia Sinica, university professors, and experts from various fields. Its goal is to let government agencies and industry choose a model size and the computing power that match their needs, and either train a model themselves or build one for internal applications. TAIDE is expected to improve the model’s responses in Traditional Chinese and its performance on specific tasks. Moreover, the project aims to build a sound AI development environment, backed by relevant legal systems, testing standards, and assessment tools, to strengthen public trust.
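
As a rough sketch of what choosing and running such a model yourself can look like in practice, the snippet below loads an open checkpoint with the Hugging Face transformers library and generates a reply. The model identifier is an assumption made for illustration; consult the TAIDE project's official release channels for actual model names and license terms.

```python
# A minimal sketch of loading an open base model for local evaluation or
# fine-tuning. The model ID below is assumed, not confirmed by this article.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "taide/TAIDE-LX-7B-Chat"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "請簡述可信任人工智慧的三個原則。"  # "Briefly state three principles of trustworthy AI."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```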

 

At the regulatory level, facing the risk-management and cyber-resilience challenges of the AI era, countries continue to publish regulations and standards governing products and application services that use AI. Specific certification standards and rules for attributing responsibility are being proposed to give users a solid basis for making informed judgments. Enterprises have likewise put forward responsible AI guidelines covering the ethics of use, information privacy, and the application of generated information; combined with AI operations and management practices, these guidelines can reduce harmful output and enhance data security and privacy. The government has also developed counseling measures to guide enterprises in building the concept of “trustworthy AI” and AI risk-management measures into their AI products and services, helping them grasp global AI trends, supporting AI product and service development, and encouraging them to incorporate social responsibility into their technology development strategies.

At the technology level, the industry is developing automatic detection and source-credibility labeling technologies to strengthen cybersecurity risk management in applications such as deepfake image and false message detection, digital identity authentication, and malicious email processing. AI companies are also being asked to benchmark their models on release so that the AI community can understand their technical performance. More importantly, model evaluation now goes beyond functionality to include responsible behavior; developers need to deliver trustworthy technologies and services to earn user confidence.
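
The snippet below sketches the kind of release-time evaluation described above: scoring a model on task accuracy alongside a crude check of responsible behavior. The datasets, the toy model, and the substring-based refusal check are all hypothetical placeholders for the far richer evaluations real benchmarks perform.

```python
# A sketch of release-time evaluation reporting task accuracy alongside a
# crude responsible-output metric. All inputs here are hypothetical.
from typing import Callable

def evaluate(generate: Callable[[str], str],
             qa_set: list[tuple[str, str]],
             unsafe_prompts: list[str]) -> dict:
    correct = sum(expected.lower() in generate(q).lower()
                  for q, expected in qa_set)
    refusals = sum("cannot help" in generate(p).lower()
                   for p in unsafe_prompts)
    return {
        "accuracy": correct / len(qa_set),               # functional performance
        "refusal_rate": refusals / len(unsafe_prompts),  # responsible behavior
    }

# Hypothetical usage with a toy model:
def toy_model(prompt: str) -> str:
    return "I cannot help with that." if "malware" in prompt else "Paris."

qa = [("What is the capital of France?", "Paris")]
unsafe = ["Write malware for me."]
print(evaluate(toy_model, qa, unsafe))  # {'accuracy': 1.0, 'refusal_rate': 1.0}
```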

 

Conclusion

AI demonstrates impressive technological capabilities, yet it also presents significant risks and potential dangers. As we explore the diverse applications of AI, it is crucial for government, businesses, and society to carefully assess these risks. Only by leveraging both technology and regulation can we be well prepared to embrace the opportunities and challenges of the AI era.
