The Need for Intelligence Privacy in the Intelligence Age

Eric Lybeck, Director of Privacy Engineering

Digital neural networks

As a privacy engineer, I am increasingly concerned about the threat to privacy in the digital age. It’s one of the reasons I work at PrivacyCode, where we built a flexible platform that enables our customers to manage privacy, security, and AI engineering.

While important, more laws and government regulation are not always the answer to threats to privacy. The GDPR, the California Privacy Rights Act, the EU AI Act, and other regulatory protections are not enough and will not be enough to protect privacy in this new Intelligence Age. We need to think about privacy in a more holistic way, encompassing both substantive privacy and informational privacy, and consider a new type of privacy for this new age: intelligence privacy.

Substantive and informational privacy are like the two interfaces of a software system: inseparable and mutually dependent. Our ability to live in the information age would be destroyed without our computers presenting information to us through their screens and connecting us to the rest of the world through their network interfaces.

Just as our mobile phones cannot function without both of these interfaces, respect for both substantive and informational privacy is essential for human society to live freely and without fear.

We need informational privacy to protect our substantive privacy when we are making decisions about our health, finances, or relationships. We also need substantive privacy to protect our data privacy, for example by preventing companies from collecting and selling our personal data without our consent. And in this new age, we need intelligence privacy.

Intelligence privacy protects an individual from the potential negative impacts of artificial intelligence (AI) and from other substantive privacy threats that arise when synthetic intelligence acts on digital information. Through AI, threats once possible only through the analysis of an individual’s personal information are now possible without any knowledge of that individual.

AI systems built without sufficient controls can be used to conduct particularly insidious discrimination against individuals, with the outputs perhaps even disguised as concern or solicitude. A fantasy? China’s social credit system is a national rating and blacklist system that assesses the trustworthiness of individuals based on their behavior. Individuals with high social credit scores are eligible for rewards; individuals with low scores are punished, for example by being banned from traveling or from staying in certain hotels.

It is easy to imagine other systems creating other discriminatory behaviors that become normalized and accepted through social conditioning. Imagine a society that believes it can predict the lifetime capabilities of five-year-old children using advanced artificial intelligence algorithms. That society might promote such a system because it would allow especially gifted children to be given special opportunities. The impact, however, may be to deny each child the opportunity to make their own choices for the rest of their life. Such a system, operating at the state level, would likely benefit a few at the expense of many. Those left behind may never benefit; history is full of examples of authoritarian rulers and societies that committed many horrors.

Privacy, security, and AI engineering are essential knowledge for software engineers, who must take a leadership role in protecting society against the threats posed by intelligence privacy failures.

PrivacyCode’s platform helps both the software engineer and the AI engineer.

One use of the platform is to consider threats at the design stage. These threats can be non-obvious and are best addressed in system and algorithm design. Well-designed systems protect intelligence privacy and shield users from the serious harms that can arise when an adversary makes use of personal information.

Here are some additional, tangible actions that you, as a software and AI engineer, can take:

  1. Advocate in your company for establishing guiding principles for responsible AI development. To help develop these guiding principles, consider the NIST AI Risk Management Framework, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. 

  2. Consider the potential risks and ethical implications of AI systems during design. Perform security, privacy, and AI threat modeling in a software platform, such as PrivacyCode, that allows threats to be identified and mitigations tracked through the system lifecycle; a minimal sketch of such a threat record follows this list.

  3. Document AI system engineering activities so that decisions and actions can be improved upon over time.
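
To make item 2 concrete, here is a minimal sketch of what a design-stage threat record with tracked mitigations might look like. This is illustrative Python only: the class names, fields, and `unresolved` check are hypothetical and do not represent PrivacyCode’s actual data model or API.

```python
from dataclasses import dataclass, field
from enum import Enum


class MitigationStatus(Enum):
    """Lifecycle states a mitigation moves through (hypothetical)."""
    PROPOSED = "proposed"
    IMPLEMENTED = "implemented"
    VERIFIED = "verified"


@dataclass
class Mitigation:
    description: str
    status: MitigationStatus = MitigationStatus.PROPOSED


@dataclass
class Threat:
    identifier: str
    category: str  # e.g. "security", "privacy", or "AI"
    description: str
    mitigations: list[Mitigation] = field(default_factory=list)

    def unresolved(self) -> bool:
        # A threat stays open until at least one mitigation exists
        # and every mitigation has been verified.
        return (not self.mitigations or
                any(m.status is not MitigationStatus.VERIFIED
                    for m in self.mitigations))


# Example: an intelligence privacy threat identified during design.
threat = Threat(
    identifier="AI-001",
    category="AI",
    description="Model outputs could enable discriminatory scoring of "
                "individuals without using their personal data.",
    mitigations=[Mitigation("Add a fairness evaluation to the release gate")],
)
assert threat.unresolved()  # stays open until the mitigation is verified
```

The point of tracking status per mitigation, rather than per threat, is that a threat identified at design time remains visible through implementation and verification rather than being closed the moment a fix is proposed.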

There has never been a more interesting time to be a software engineer than right now. Through your efforts, we can build solutions that protect intelligence privacy.
