Expert Opinion: The Growing Importance of Preserving AI Value

The Growing Use of Artificial Intelligence in Software Solutions: Opportunities and Challenges

The integration of artificial intelligence (AI) into new software solutions has become a crucial topic across technological, economic, and societal spheres. Since the advent of ChatGPT in November 2022, generative AI has rapidly emerged as a powerful engine of innovation, sparking a wave of development in various industries. However, this rapid growth has also raised profound questions surrounding ethics and trustworthiness. Central to these concerns is the issue of confidence: How much trust can we place in a technology that is designed to assist in decision-making? The primary risk lies in the potential for AI systems to produce subtly inaccurate responses—whether intentionally or unintentionally—that could mislead humans into making hazardous choices.

Today, AI systems are applied to a broad spectrum of tasks, each with its own set of risks if the AI were to malfunction or be manipulated. Here are some notable examples illustrating the potential consequences of AI failures:

AI and Predictive Maintenance in Industry

In manufacturing sectors such as automotive production, AI technologies analyze data collected from engines and machinery on factory floors. These systems predict optimal replacement times for key components, aiming to reduce downtime and maintain safety. If such a system is compromised, for instance by failing to flag a faulty part, the consequences could be severe: an AI that misjudges maintenance needs can lead to accidents, equipment failure, or costly damage.

Smart Athletic Shoes to Prevent Injuries

Innovative sports footwear now integrates sensor-laden insoles that assess a runner's gait in real time. These AI-powered devices analyze factors like foot strike and posture, advising the wearer to adjust their stance to conserve energy or avoid strain. Should the AI malfunction, it might overlook improper gait patterns or, worse, encourage biomechanical problems by providing incorrect guidance. Over time, such errors could cause serious injuries.

Virus Detection Using AI

In cybersecurity and bioinformatics, AI algorithms are used to detect malicious software or biological viruses by identifying traces of suspicious code or genetic markers. If these detection models are compromised—say, through hacking or deliberate data poisoning—they could miss the presence of malware or harmful pathogens. Such lapses might allow cyberattacks to succeed or biological threats to go unnoticed, with potentially catastrophic results.

AI’s reach spans nearly every domain where decision support is essential. Consequently, safeguarding AI systems at multiple levels is of utmost importance. This involves protecting the training data, the models themselves, and their operational environments.

Protecting the Value of AI

With regard to intellectual property, the challenge is to prevent the theft or misuse of valuable innovations. Developing AI applications often requires significant investment: years of work by talented researchers and developers. If competitors or malicious actors manage to steal proprietary code or data, the original creators could suffer substantial financial losses. The Chinese medium-range airliner, the Comac C919, allegedly developed in record time by drawing on Western designs without authorization, exemplifies these risks.

Since many AI models are open source, the true value lies primarily in training data—the extensive, meticulously labeled datasets that cost substantial resources to compile. These datasets, often containing billions of examples (like “this is a dog” versus “this is not a dog”), are both expensive and critical assets. Protecting them from theft involves sophisticated encryption techniques, which must be rigorously applied both during storage and during active use. Ensuring that data remains secure against unauthorized access or exfiltration is essential to maintaining competitive advantage and integrity.
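Alongside the encryption the paragraph above describes, a complementary safeguard is an integrity manifest: a checksum recorded for every dataset file, so that any tampering or substitution is detected before training. The sketch below, using only Python's standard library, shows the idea; the function names are illustrative, not a specific product's API.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 so large dataset shards fit in constant memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a checksum for every file in the dataset directory."""
    return {str(p.relative_to(data_dir)): sha256_of_file(p)
            for p in sorted(data_dir.rglob("*")) if p.is_file()}

def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return the files whose current checksum no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of_file(data_dir / name) != digest]
```

In practice the manifest itself would be signed and stored separately from the data, so an attacker who can alter the dataset cannot also silently rewrite the checksums.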

Securing AI in Operation

Beyond protecting raw data, safeguarding the models’ internal parameters—such as weights and instructions (prompts)—is essential. An attacker might manipulate these settings—a process called “poisoning”—to control or corrupt the AI’s behavior. Preventative measures include implementing robust access controls, verifying prompt integrity, and deploying defenses against such malicious interventions. Ensuring AI safety requires continuous monitoring and protective mechanisms to prevent unauthorized modifications that could cause the AI to behave unpredictably or maliciously.
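One of the measures above, prompt integrity verification, can be sketched with Python's standard `hmac` module: the deployed system prompt is signed once with a secret key, and the tag is re-checked before every use so a tampered prompt is rejected. The key handling and function names here are assumptions for illustration, not a standard API.

```python
import hmac
import hashlib

def sign_prompt(secret_key: bytes, prompt: str) -> str:
    """Compute an HMAC-SHA256 tag over the system prompt at deployment time."""
    return hmac.new(secret_key, prompt.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_prompt(secret_key: bytes, prompt: str, expected_tag: str) -> bool:
    """Recompute the tag before each use; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign_prompt(secret_key, prompt), expected_tag)
```

The secret key would live in a secrets manager or HSM rather than alongside the prompt, so an attacker who can edit the prompt store cannot forge a matching tag.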

Safeguarding the Deployment Environment

Perhaps the most critical security challenge involves the entire environment in which AI is deployed. This encompasses software libraries, runtime infrastructure, and the underlying hardware. The attack surface extends across all these layers, making it vital to secure every component. Attackers could exploit vulnerabilities in deployment scripts or libraries, gaining access to manipulate or extract sensitive information. Incorporating security libraries and practices into each stage of AI development and deployment is key to preventing intrusion attempts, data theft, or intellectual property exfiltration.
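For the library layer specifically, a minimal runtime guard, assuming a Python deployment, is to fail fast at startup when an installed dependency deviates from the versions vetted during security review. The `check_pinned_versions` helper below is an illustrative sketch built on the standard `importlib.metadata` module, not a standard tool.

```python
import importlib.metadata

def check_pinned_versions(pins: dict) -> list:
    """Compare installed package versions against vetted pins.

    Returns a list of (package, installed_or_'missing', expected) tuples;
    an empty list means the environment matches the review baseline.
    """
    problems = []
    for name, expected in pins.items():
        try:
            installed = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            problems.append((name, "missing", expected))
            continue
        if installed != expected:
            problems.append((name, installed, expected))
    return problems
```

A deployment script would call this with the reviewed pin set and refuse to start the model server on any mismatch, shrinking the window in which a tampered or substituted library can run.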

In summary, every constituent of an AI system (training data, model parameters, and supporting software) must be vigilantly protected against theft, tampering, and malicious modification. Only through comprehensive security measures can AI be trusted as a reliable asset that underpins innovation, guarantees data integrity, and fosters long-term value for organizations. Building such resilience is essential to turning AI from a promising technology into a durable pillar of future business strategies.

* Cyrille Ngalle is Vice President of Software & Data Protection at Quarkslab.

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.