News & Insights


Mastering the Art of Privacy Engineering

Julia Child’s TV appearances turned French cooking into a mainstream and lucrative community. PrivacyCode is doing the same thing for privacy engineering.

Eric Lybeck, Director of Privacy Engineering


Books that were just collections of recipes or knitting patterns were not very exciting. But when Julia Child appeared on television, she was exciting, she inspired a lot of people to cook French food, and her books became best sellers.

So it was with the early internet. The first websites were like static recipes or knitting patterns, and not very engaging. Once there was a community of cooks, cookware makers, food merchants, and eventually competing restaurants, things became interesting and lucrative for everyone involved.

These communities couldn’t exist and couldn’t find their subject matter until Vint Cerf and his colleagues gave the internet standard addresses, turning websites into little interactive portals and, eventually, independent companies.

We’ve seen the same pattern with early efforts to apply technology to the problems of privacy. Some thought AI solutions would get us there: mark up the “data,” and ML would automatically inventory and classify it. But there just wasn’t enough context, and neither the actor nor the change over time could be understood.

Then there are the solutions that scan the code, again looking for the magical context.

Code scanning not working? You need to tag the code with the right context.

With PrivacyCode, we have Privacy Objects, or Tasks. The task markup language allows us to analyze the behavior of the Task itself. A task is inherently a rule, and the language can recognize who, why, where, and when an operation is performed on the rule, in relation to the rule, or in combination with the rule.

In other words, context. 
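
For illustration only, here is a minimal sketch in Python of what a context-tagged task might look like. The field names and structure are hypothetical and are not PrivacyCode’s actual task markup; the point is simply that each task carries the who, why, where, and when alongside the rule itself.

```python
from dataclasses import dataclass

@dataclass
class PrivacyTask:
    """A hypothetical context-tagged privacy task (illustrative only)."""
    rule: str    # the requirement the task encodes
    who: str     # the actor performing the operation
    why: str     # the purpose of the processing
    where: str   # the system or jurisdiction where it happens
    when: str    # the lifecycle stage or trigger

# Example: a task describing deletion of expired customer records
task = PrivacyTask(
    rule="Delete personal data when the retention period expires",
    who="Data platform engineering team",
    why="Storage limitation / retention compliance",
    where="Customer data warehouse (EU region)",
    when="Nightly retention job",
)

print(f"{task.who} -> {task.rule} ({task.why})")
```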

The synergy among Tasks, Rules, and Actors is correlated to business outcomes. How do your software engineering activities help you achieve your corporate goals? PrivacyCode provides the answer.

The AI standards bodies are all knitting very lovely patterns, as are the individual data protection compliance rules. PrivacyCode creates an interactive community that brings together lawyers, developers, and marketing folks to make the unhappy happy.

We have the platform, and the swarm intelligence branch of AI has the right math. The future of PrivacyCode is irresistible.


The Need For Intelligence Privacy in the Intelligence Age

Intelligence privacy is the protection of an individual from the potential negative impacts of Artificial Intelligence (AI) and other substantive privacy threats that arise from the use of digital information by synthetic intelligence.

Eric Lybeck, Director of Privacy Engineering

Digital neural networks

As a privacy engineer, I am increasingly concerned about the threat to privacy in the digital age. It’s one of the reasons I work at PrivacyCode, where we built a flexible platform that enables our customers to manage privacy, security, and AI engineering.

While important, we know the answer to threats to privacy is not always more laws and government regulation. The GDPR, the California Privacy Rights Act, the EU AI Act, and other regulatory protections are not, and will not be, enough to protect privacy in this new Intelligence Age. We need to think about privacy in a more holistic way, encompassing both substantive privacy and informational privacy, and consider a new type of privacy for this new age: intelligence privacy.

Substantive and informational privacy are like the two interfaces of a software system, inseparable and mutually dependent. Our ability to live in the information age would be destroyed without our computers presenting information to us through their screens and communicating with the rest of the world through their network connections.

Just as our mobile phones cannot function without both of these interfaces, respect for both substantive and informational privacy is essential for human society to live freely and without fear.

We need informational privacy to protect our substantive privacy when we are making decisions about our health, finances, or relationships. We also need substantive privacy to protect our data privacy, for example, by preventing companies from collecting and selling our personal data without our consent. We also need intelligence privacy.

Intelligence privacy is the protection of an individual from the potential negative impacts of Artificial Intelligence (AI) and other substantive privacy threats that arise from the use of digital information by synthetic intelligence. Through the use of AI, threats once possible only through the analysis of an individual’s personal information are now possible without any knowledge of that individual.

AI systems built without sufficient controls can be used to conduct particularly insidious discrimination against individuals. Perhaps the outputs are even disguised as concern or solicitude. A fantasy? China’s social credit system is a national credit rating and blacklist system that assesses the trustworthiness of individuals based on their behavior. Individuals given higher social credit scores are eligible for rewards, while individuals with low scores can be punished, for example by being banned from traveling or from staying in certain hotels.

It is easy to imagine other systems creating other discriminatory behaviors that can be normalized and accepted through social conditioning. A society might imagine it could predict the lifetime capabilities of five-year-old children using advanced artificial intelligence algorithms. The same society might promote such a system because it would allow highly gifted children to be given special opportunities. The impact, however, may be to deny those children the opportunity to make their own choices for their entire lives. Such a system operating at a state level will likely benefit a few at the expense of many. Those left behind may never benefit; history is full of examples of authoritarian rulers and societies that committed many horrors.

Privacy, security, and AI engineering are essential knowledge for software engineers, who must take a leadership role in helping to protect society against threats from intelligence privacy failures.

PrivacyCode’s platform helps the software engineer and the AI engineer alike.

One use of the platform is to consider threats at the design stage. These threats can be non-obvious and are best addressed in system and algorithm design. Well-designed systems protect intelligence privacy and protect users from the serious harms that can arise from the use of personal information by an adversary.

Here are some additional, tangible actions that you, as a software and AI engineer, can take:

  1. Advocate in your company for establishing guiding principles for responsible AI development. To help develop these guiding principles, consider the NIST AI Risk Management Framework, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. 

  2. Consider the potential risks and ethical implications of the AI systems during design. Perform security, privacy, and artificial intelligence threat modeling in a software platform, such as PrivacyCode, that allows threats to be identified and mitigations tracked through the system lifecycle (a minimal sketch of such a tracked threat record follows this list).

  3. Document AI system engineering activities so that decisions and actions can be improved over time.
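
As a rough illustration of point 2, the sketch below shows one way a threat and its mitigations could be recorded and tracked across a system’s lifecycle. The structure and field names are hypothetical and are not tied to PrivacyCode or any other specific platform.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    IDENTIFIED = "identified"
    MITIGATION_PLANNED = "mitigation planned"
    MITIGATED = "mitigated"
    RISK_ACCEPTED = "risk accepted"

@dataclass
class ThreatRecord:
    """A hypothetical record for tracking an AI/privacy threat through the lifecycle."""
    threat: str
    affected_component: str
    mitigations: list[str] = field(default_factory=list)
    status: Status = Status.IDENTIFIED

    def add_mitigation(self, description: str) -> None:
        """Attach a planned mitigation and update the tracking status."""
        self.mitigations.append(description)
        self.status = Status.MITIGATION_PLANNED

# Example: a discrimination threat identified during a design review
record = ThreatRecord(
    threat="Model outputs systematically disadvantage a protected group",
    affected_component="Loan eligibility scoring model",
)
record.add_mitigation("Evaluate disparate impact on held-out demographic slices before release")
print(record.status.value, "-", record.threat)
```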

There has been no more interesting time to be a software engineer than the present. Through your efforts, we can build solutions that protect intelligence privacy.


Extending PrivacyCode to Address Responsible AI

PrivacyCode’s AI/ML engine and Privacy Object Library enable any organization to manage Responsible AI challenges by connecting them to business goals.

By Eric Lybeck, Director of Privacy Engineering

AI is now ubiquitous and has already changed the way we live and work. We buy more and more goods and services online, sometimes without even realizing AI is providing us recommendations. Social networks use advanced algorithms to keep us engaged. In the field of medicine, AI offers incredible progress in the early detection and treatment of disease.

As with any new technology, it’s important for organizations to prioritize their investments, address threats and risks posed by AI, and measure results in an effective manner.

PrivacyCode, working with input from our design partners and leveraging our AI/ML engine, created our Privacy Object Library that enables any organization to manage Responsible AI challenges by connecting them to business goals.

Link Responsible AI to Business Goals

The first step in building a Responsible AI program is identifying your desired outcomes. Whether the outcome is to improve customer retention, increase efficiency, or accelerate innovation, as long as you know the outcomes, you can start to track the AI initiatives and their impact. 

For example, the goal may be to accelerate innovation in a specific market or vertical, so projects that incorporate AI-powered or AI-defined capabilities into your products may align with this corporate goal. Tracking, measuring, and proving that impact is what PrivacyCode.ai was built for.
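
As a purely illustrative sketch, here is one way the link between AI projects and a corporate goal, along with a simple impact metric, might be recorded. The goal, project names, metrics, and numbers are made up; this is not PrivacyCode’s data model.

```python
# Hypothetical mapping of AI initiatives to a corporate goal (illustrative only)
goal = "Accelerate innovation in the retail vertical"

projects = [
    {"name": "AI-powered product recommendations", "metric": "features shipped per quarter", "baseline": 2, "current": 5},
    {"name": "Automated catalog tagging", "metric": "weeks to launch a new category", "baseline": 12, "current": 7},
]

print(f"Goal: {goal}")
for p in projects:
    print(f"  {p['name']}: {p['metric']} {p['baseline']} -> {p['current']}")
```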

Use a Common Enterprise-wide Framework

Despite popular headlines, AI is not an unregulated “Wild West.” Existing regulations already govern AI’s use cases and derivatives. Consider that all internal corporate policies apply as well, so you need to keep in mind all of these cross-disciplinary requirements related to security, privacy, ethics, and non-discrimination, to name a few. 

The commonly cited uncertainty of AI regulation often comes from new or emerging laws and frameworks that either add to or intersect with these existing requirements. For example, there are new frameworks, such as the NIST AI Risk Management Framework, and proposed new laws, such as the EU AI Act. This makes it increasingly important, yet difficult, for organizations to stay up-to-date on the latest developments. PrivacyCode.ai was built for this too.

We use AI and machine learning technology to quickly update our Privacy Object Library with new and emerging frameworks and requirements. Then we distill them into repeatable, reusable tasks that business teams can own and implement. Our Responsible AI library, Ethical and Responsible AI Essentials, provides the foundation of an enterprise-wide framework.

Design, Build, and Maintain Responsible AI Systems

Our customers use PrivacyCode.ai to manage Responsible AI projects and solve problems such as validating AI training dataset compliance, communicating how AI systems work, and proving fair and non-discriminatory results.

• • •

If you are interested in more information about how you can improve your outcomes with Responsible AI, you can contact our team here.


Mind the Gap

The risks and consequences that come with being entrusted with people’s personal data have never been greater, so making sure you have the right teams and the right tools to protect it is critical. Once you do, the stress of having to “mind the gap” recedes and you can move forward with confidence.

by Ian Oliver, Distinguished Member of Technical Staff, Bell Labs

Finally, a solution to the biggest problem in privacy management!

In London, whenever you take the tube (the subway), you’ll notice a somewhat ominous recorded voice telling you to “mind the gap”; in other words, to stay clear of the space between the platform and the train. Otherwise, ouch.

I thought of this the other day when someone asked me what I thought was most challenging for organizations when trying to build (or rebuild) a privacy program. I’m referring to the gap – Ok, let’s call it a chasm – between privacy legal and compliance experts who create policies and procedures, and the architects and engineers who must implement them within products, data management protocols and other activities that are core to doing business today.

To anyone who has worked on either end of a privacy team, the disconnect between those who create the policies and the engineers who must operationalize them is well-known, albeit not often openly discussed. Instead, endless meetings, email threads, and PowerPoint decks go back and forth, in a well-intended but often futile attempt for these two very different sets of experts to get on the same page. This gap is much more than frustrating and inefficient – it can be costly and even dangerous when it involves protecting the private information of individuals. The damage goes beyond penalties. The business impact, often overlooked in media coverage, is significant. Months or years spent designing and launching products and data mining strategies that are then found to violate privacy regulations are sunk costs that could be avoided – if privacy is designed into products from the outset. And that means lawyers and developers need to communicate.

It’s not like these teams don’t want to talk to or understand each other. They just don’t know how. They speak different languages, and they are focused on different objectives, yet each is held responsible for the successful implementation of a sound privacy strategy that follows the law and will protect a company’s brand.

An old model for a new world

Historically, privacy programs were set up from a legal perspective: understand the regulations, write a policy, hand it off to others to implement, and done.

This still-entrenched process was designed for a world that no longer exists. Today, personal data is the currency that drives revenue for most businesses. Understanding how this data is used and protected – and how systems are built to do so effectively – is essential for privacy and legal experts. The days of “we have our privacy policy, so we’re compliant,” are over.

As well they should be. Imagine if an architect simply designed a building without understanding the engineering required to make that building safe. Rather, an architect designs with structure in mind, visits the building site, collaborates closely with the construction team, and ensures their original vision is implemented in a way that follows all the required regulations. When was the last time you saw a privacy lawyer sitting down with a programmer to understand the technical implementation of a policy? Thankfully, that’s starting to change.

Recently, I was pleased (and admittedly surprised) to see this excerpt from a privacy panel at the RSA Conference 2022. Chief Privacy Officers from some of the giants of tech, including Apple and Google, participated in a keynote panel. This excerpt from coverage of the event perfectly articulates where I believe we are today:

The role of engineers in actualizing the governance of privacy policies and procedures was also addressed in the session. Apple’s Horvath said that deep technical knowledge is critical to privacy, such as understanding databases. “The best friend a privacy person has in a company are security and privacy engineers,” she stated.

Enright concurred, commenting that:

“the privacy engineering function at Google is perhaps the most fundamental when I think about our product strategy. The way things are evolving is about more than meeting the requirements of changing laws.”

-James Coker, Infosecurity Magazine, “RSAC: The Growing Relevance and Challenges of Privacy”

In my mind, this demonstrates an awareness at the highest levels of some organizations that connecting the two ends of the privacy spectrum to manage the tsunami of data that is their bread and butter is imperative. So how, exactly, can they do that? Where are the structure and tools that can get them there?

The bridge that closes the gap

Let’s be clear: lawyers are not about to become engineers, and vice-versa. However, each discipline can – and must – be able to see the bigger picture of what they are creating together and be able to collaborate throughout the process. To date, there has not been a practical and accessible way for them to do this.

The solution, in my mind, has always been a tool that is accessible to everyone involved in the process of planning and operationalizing a privacy program efficiently and without ambiguity. I believe Privacy Code and their SaaS platform does that, and in some pretty amazing ways. (Full disclosure: I am an Advisor to Privacy Code and honored to be one.)

There are many things to like about the Privacy Code platform and if you’d like to see how it works firsthand, contact the team. But at a very high level, I like the fact that it gives me a structure in which to operate. It lets me see – as an engineering/technical person – exactly what I need to do, and most importantly, why I am doing it. And it lets everyone move between looking at the project on a developer level and business level. This is so critical. As I said, these two domains see and think differently. But if you can give them a lens, as Privacy Code does, to see the same project through their specific needs, you save an enormous amount of time. And time, as we all know, is money.

There’s another reason that I think Privacy Code is the kind of solution the privacy world has been waiting for: this platform was built by two impressive entrepreneurs who know privacy. They’ve lived it, worked it, and one wrote the definitive book about it. They built something based on their own experiences as corporate executives and product leaders trying to bridge the gap between privacy teams and developers. Which is why it works.

Risk vs. Compliance: The Future of Privacy

I’ll sign off with a note about where I think we’re headed. Privacy regulation, laws and penalties are only going to increase. The use of consumer data for business is going to get more complex. So privacy teams within organizations need to quickly shift their mindset from one of “being compliant” to “managing risk.” This may sound subtle, but it’s actually a profound evolution from where privacy programs have historically been.

The risks and consequences that come with being entrusted with people’s personal data have never been greater, so making sure you have the right teams and the right tools to protect it is critical. Once you do, the stress of having to “mind the gap” recedes and you can move forward with confidence.


Dr. Ian Oliver is a Distinguished Member of Technical Staff at Bell Labs working on Trusted and High-integrity Cyber Security applied to 5G and 6G mobile technologies, NFV, Edge, and IoT devices, with particular emphasis on safety-critical domains such as future railway, medical devices, and medical systems.

He is the author of the book "Privacy Engineering: A data flow and ontological approach" and holds over 200 patents and academic papers.


Media inquiries

Media@PrivacyCode.ai