
AI and integrity: Bruce Schneier sounds the alarm in Montreal


Visiting Montreal, American cryptographer Bruce Schneier sounded the alarm on “integrity,” which he identifies as our greatest challenge as artificial intelligence systems move from prediction to action.

Do you remember the global chaos of July 2024? A simple update error by the company CrowdStrike paralyzed airports and hospitals, grounding millions of people. For cybersecurity expert Bruce Schneier, who spoke about the event last week at the University of Montreal, this incident was a wake-up call: we have entered the “age of integrity.”

AI is no longer just talking—it’s acting

For thirty years, cybersecurity focused on two things: protecting our secrets (confidentiality) and keeping computers running (availability). But today, with AI and connected devices, everything is changing. Machines no longer just suggest movies—they are starting to drive our cars, manage our power grids, and administer medications.

“Integrity is about ensuring that data is accurate at the moment of collection, comes from a trusted source, and has not been tampered with, spoofed, or replayed,” Schneier explains.

This is what Schneier calls the “automation of consequences at machine speed.” When an AI makes a mistake, it no longer just displays an error on a screen—it acts physically in the world. In this context, integrity means ensuring that the system produces a correct outcome. Without that guarantee, even the smartest AI becomes a danger.

Examples of compromised integrity

For Schneier, integrity is first and foremost about trust in a system’s state. “Restarting a computer, which returns it to a known and healthy state, is itself a mechanism of integrity. Digital signatures are another,” he illustrates.

The expert emphasizes a point often misunderstood: “Integrity does not depend on malicious intent. Just as exposing personal data is a breach of confidentiality even if no one accesses it, a lack of integrity guarantees is already a violation—even if no deliberate manipulation occurs.” In Quebec, this idea resonates closely with the requirements of Bill 25.

Echoing the concerns of Yoshua Bengio and Yann LeCun

This approach to security echoes warnings from researchers like Yoshua Bengio, who regularly alerts the public to the risks of AI systems that could escape human control. While Bengio emphasizes alignment and control issues, Schneier offers a more structural response: make integrity—of data, systems, and processes—a fundamental pillar.

Yann LeCun, meanwhile, advocates for AI capable of modeling the laws of the physical world to reduce reasoning errors. In Schneier’s reading, this requirement translates into a concrete need to ensure that information processed by AI—whether from industrial sensors, aerospace systems, or autonomous vehicles—is authentic, reliable, and unaltered.

Building trust, one rule at a time

The expert’s message is clear: “accuracy enables trust.” For us to one day entrust our lives to AI agents, every step—from captured data to final action—must be verifiable.

However, technology alone is not enough. Just as we created laws to protect our privacy, Schneier stresses the need for strict regulation to compel companies to build reliable systems. The challenge of the next decade will not only be making AI more powerful, but ensuring it remains predictable and its integrity guaranteed, no matter what.
