News & Insights

Melanie Ensign

Mastering the Art of Privacy Engineering

Julia Child’s TV appearances turned French cooking into a mainstream pursuit with a lucrative community around it. PrivacyCode is doing the same thing for privacy engineering.

Eric Lybeck, Director of Privacy Engineering

Photo by @ellaolsson on Unsplash

Books that were just recipes or knitting patterns were not very exciting. But when Julia Child appeared on television, she was exciting, and she inspired a lot of people to cook French food. Her books became best sellers.

So it was with the early internet. The first websites were like static recipes or knitting patterns, and not very engaging. But once there was a community of cooks, pots and pans, food merchants, and eventually competing restaurants, things became interesting and lucrative for everyone involved.

These communities couldn’t exist, and couldn’t find their subject matter, until standard addressing (Vint Cerf’s internet addresses, and later the web’s URLs) turned websites into potential little interactive portals and even independent companies.

We’ve seen the same pattern of events in early efforts to apply technology to the problems of privacy. Some thought AI solutions were going to get us there: mark up the “data” and machine learning would automatically inventory and classify it. But there just wasn’t enough context. The data changed over time, and the systems could not understand the context or the actor.

Then there are the solutions that scan the code, again looking for the magical context.

Code scanning not working? You need to tag the code with the right context.
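
As a purely hypothetical illustration (not PrivacyCode’s actual mechanism), tagging code with context might look like a small decorator that attaches the who and the why to a function handling personal data:

```python
def privacy_context(purpose, data_categories, actor):
    """Attach who/why metadata to a function that touches personal data.
    Hypothetical annotation, for illustration only."""
    def decorator(func):
        func.__privacy_context__ = {
            "purpose": purpose,                  # why the operation happens
            "data_categories": data_categories,  # what kinds of data it touches
            "actor": actor,                      # who performs it
        }
        return func
    return decorator

@privacy_context(
    purpose="order fulfillment",
    data_categories=["name", "shipping_address"],
    actor="fulfillment-service",
)
def ship_order(customer, order):
    ...  # business logic; a scanner can now read the context above
```

With metadata like this in place, a scanner no longer has to guess intent from code structure alone.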

With PrivacyCode, we have Privacy Objects, or Tasks. The task markup language allows us to analyze the behavior of the Task itself. A Task is inherently a rule, and the language can recognize who performs an operation, and why, where, and when it is performed: on the rule, in relation to the rule, or in combination with the rule.

In other words, context. 

The synergy among Tasks, Rules, and Actors is correlated to business outcomes. How do your software engineering activities help you achieve your corporate goals? PrivacyCode provides the answer.
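
As a rough sketch of the idea (the field names here are illustrative assumptions, not PrivacyCode’s actual schema), a context-aware Task carries the who, why, where, and when alongside the rule it embodies, and ties them to a business outcome:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PrivacyTask:
    """Illustrative model of a Task: a rule plus the context of its use."""
    rule: str               # the rule the Task embodies
    actor: str              # who performs the operation
    purpose: str            # why it is performed
    location: str           # where it runs (system or jurisdiction)
    performed_at: datetime  # when it happened
    business_outcome: str   # the corporate goal the Task supports

task = PrivacyTask(
    rule="Delete account data within 30 days of a verified request",
    actor="data-deletion-service",
    purpose="honor user deletion rights",
    location="us-east production",
    performed_at=datetime(2024, 3, 1, 12, 0),
    business_outcome="reduce regulatory risk and build customer trust",
)
```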

The AI standards bodies are all knitting very lovely patterns, as are the individual data protection compliance rules. PrivacyCode creates an interactive community that brings together lawyers, developers, and marketing folks, to make the unhappy, happy.

We have the platform, and the swarm intelligence branch of AI has the right math. The future of PrivacyCode is irresistible.

Melanie Ensign

The Need For Intelligence Privacy in the Intelligence Age

Intelligence privacy is the protection of an individual from the potential negative impacts of Artificial Intelligence (AI) and other substantive privacy threats that arise from the use of digital information by synthetic intelligence.

Eric Lybeck, Director of Privacy Engineering

Digital neural networks

As a privacy engineer, I am increasingly concerned about the threat to privacy in the digital age. It’s one of the reasons I work at PrivacyCode, where we built a flexible platform that enables our customers to manage privacy, security, and AI engineering.

While important, we know the answer to threats to privacy is not always more laws and government regulation. The GDPR, the California Privacy Rights Act, the EU AI Act, and other regulatory protections are not enough, and will not be enough, to protect privacy in this new Intelligence Age. We need to think about privacy in a more holistic way, encompassing both substantive privacy and informational privacy, and consider a new type of privacy for this new age: intelligence privacy.

Substantive and informational privacy are like the two interfaces of a software system: inseparable and mutually dependent. Our ability to live in the information age would be destroyed without our computers presenting information to us through their screens and communicating with the rest of the world through their network connections.

Just as our mobile phones cannot function without both of these interfaces, respect for both substantive and informational privacy is essential for human society to live freely and without fear.

We need informational privacy to protect our substantive privacy when we are making decisions about our health, finances, or relationships. We also need substantive privacy to protect our data privacy, for example, by preventing companies from collecting and selling our personal data without our consent. We also need intelligence privacy.

Intelligence privacy is the protection of an individual from the potential negative impacts of Artificial Intelligence (AI) and other substantive privacy threats that arise from the use of digital information by synthetic intelligence. Through the use of AI, threats once possible only through the analysis of an individual’s personal information are now possible without any knowledge of that individual.

AI systems built without sufficient controls can be used to conduct particularly insidious discrimination against individuals, perhaps with the outputs disguised as concern or solicitude. A fantasy? China’s social credit system is a national credit-rating and blacklist system that assesses the trustworthiness of individuals based on their behavior. Individuals with high social credit scores are eligible for rewards; individuals with low scores are punished, for example by being banned from traveling or from staying in certain hotels.

It is easy to imagine other systems creating other discriminatory behaviors that become normalized and accepted through social conditioning. A society might imagine it could predict the lifetime capabilities of five-year-old children using advanced artificial intelligence algorithms, and might promote such a system because it would allow highly gifted children to be given special opportunities. The impact, however, may be to deny each child the opportunity to make their own choices for their entire life. Such a system operating at the state level would likely benefit a few at the expense of many, and those left behind may never benefit; history is full of examples of authoritarian rulers and societies that committed many horrors.

Privacy, security, and AI engineering are essential knowledge for software engineers, who must take a leadership role in helping to protect society against threats from intelligence privacy failures.

PrivacyCode’s platform helps both the software engineer and the AI engineer.

One use of the platform is to consider threats at the design stage. These threats can be non-obvious and are best addressed in system and algorithm design. Well-designed systems protect intelligence privacy and shield users from the serious harms that can arise when an adversary uses their personal information.

Here are some additional, tangible actions that you, as a software and AI engineer, can take:

  1. Advocate in your company for establishing guiding principles for responsible AI development. To help develop these guiding principles, consider the NIST AI Risk Management Framework, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. 

  2. Consider the potential risks and ethical implications of AI systems during design. Perform security, privacy, and artificial intelligence threat modeling in a software platform, such as PrivacyCode, that allows threats to be identified and mitigations tracked through the system lifecycle (a minimal sketch of this idea follows this list).

  3. Document the AI system engineering activities so that decisions and actions can be improved upon over time.
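
As a minimal sketch of the threat modeling described in step 2 (the structure and names are illustrative assumptions, not PrivacyCode’s actual API), a design-stage threat register pairs each identified threat with a mitigation and tracks its status through the lifecycle:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a design-stage threat register (illustrative only)."""
    description: str
    category: str         # e.g. "privacy", "security", or "AI"
    mitigation: str
    status: str = "open"  # open -> mitigated -> verified

register = [
    Threat(
        description="Model output reveals membership in the training data",
        category="AI",
        mitigation="Apply differential privacy during training",
    ),
    Threat(
        description="Inferred attributes are used to discriminate against users",
        category="privacy",
        mitigation="Restrict sensitive outputs and add a fairness review gate",
    ),
]

# Track mitigations through the system lifecycle.
register[0].status = "mitigated"
open_threats = [t for t in register if t.status == "open"]
print(f"{len(open_threats)} threat(s) still open at design review")
```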

There has been no more interesting time to be a software engineer than the present. Through your efforts, we can build solutions that protect intelligence privacy.


Media inquiries

Media@PrivacyCode.ai