
Artificial Intelligence: The Latest Topic to Make Privacy Professionals Sweat

Omar N. Bradley said, “If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.” When he said that, he couldn’t have foreseen artificial intelligence one day writing pages of code in mere seconds or turning our wildest imaginations into AI-generated art. All jokes aside, in the last 12 months we’ve seen an onslaught of AI tools flood the market, from ChatGPT to Bard and many more. While AI has enhanced and streamlined many business operations, from hiring employees and developing marketing collateral to designing websites, it has also brought a slew of privacy concerns that you should be weighing if your organization uses this technology.

The hot topic among organizations is weighing the benefits of AI against its potential pitfalls. Apprehension around job automation, weaponization, and bias within the models is cause for concern, and bias in particular leads directly into privacy territory. With any new technology, it can be difficult to know where to start in analyzing your practices and potential risks. New York and Connecticut have already blazed the trail for AI regulation, and most other states have legislation in the works. Navigating this state and local regulatory minefield will be a complex challenge for organizations. You could play it safe by avoiding the technology altogether until definitive regulatory parameters are set in your jurisdiction, but who wants to fall behind on a revolutionary tool that could benefit your company?


Prepare for AI Legislation

To help you prepare for what is likely to be a wave of AI regulations across the globe, Truyo President Dan Clarke and Trustible Co-Founder Gerald Kierce have put together actionable steps you can take now to begin using AI in an ethical, and soon-to-be compliant, manner.

Step 1: Discovery

The biggest difficulty organizations face is finding out exactly where and how AI is being used. Discovery helps you ascertain where your company uses AI and create a record of where and how. There are different methods for this, but two main components to account for: scanning and surveying. A scan of your systems, websites, databases, and so on can give you a good idea of where AI is being used on the back end. You should also survey employees to learn how they use AI in day-to-day operations. Emerging AI laws regulate not the models so much as the use cases, where the inherent risk lies, with a strong focus on front-end utilization. An AI inventory is therefore a critical step in documenting your use cases, as the sketch below illustrates.
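To make the inventory concrete, here is a minimal Python sketch of what a single inventory record might capture, combining the scan and survey channels described above. The field names (tool, handles_pii, discovered_via, and so on) are illustrative assumptions, not a prescribed schema; adapt them to whatever your scan and survey actually produce.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    """One row in the AI inventory: a single use case, not a model."""
    name: str             # e.g. "resume screening"
    tool: str             # e.g. "ChatGPT", or an internal model name
    owner: str            # team accountable for the use case
    discovered_via: str   # "scan" (back end) or "survey" (day-to-day use)
    handles_pii: bool     # does the use case touch personal data?
    user_facing: bool     # front-end uses draw the most regulatory focus
    last_reviewed: date = field(default_factory=date.today)

# Example entries from the two discovery channels described above.
inventory = [
    AIUseCase("resume screening", "third-party ATS plugin", "HR",
              discovered_via="scan", handles_pii=True, user_facing=True),
    AIUseCase("marketing copy drafts", "ChatGPT", "Marketing",
              discovered_via="survey", handles_pii=False, user_facing=False),
]
```

Keeping the record at the use-case level, rather than the model level, mirrors how the emerging laws are written.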

Step 2: Risk Assessment

As a privacy or cybersecurity professional, you’re no stranger to risk assessments, and though one is not yet required, assessing your organization’s AI usage is imperative. Weighing the relative risk of each use case, especially when AI makes hiring decisions or performs any other activity involving PII, will help you identify where you could be exposed to prejudicial practices. Behind-the-scenes uses, such as data mapping, are inherently lower risk and shouldn’t put your organization at risk of enforcement. The European Union’s AI Act classifies specific use cases (not ML models) into one of four risk categories: unacceptable (prohibited), high, limited, and minimal risk. Organizations should start building an understanding of the regulatory requirements attached to each category.
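As a rough illustration, the triage below sketches how an inventory entry might map onto those four tiers. The heuristic is a deliberately simplified assumption for illustration, not legal advice and not the Act’s actual test; the `triage` function and its parameters are hypothetical.

```python
from enum import Enum

# The EU AI Act classifies use cases (not models) into four risk tiers.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable (prohibited)"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified assumption: decisions about people (hiring, credit, etc.) are
# treated as high risk, other user-facing tools as limited risk, and
# back-office uses such as data mapping as minimal risk.
def triage(decides_about_people: bool, user_facing: bool) -> RiskTier:
    if decides_about_people:
        return RiskTier.HIGH
    if user_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage(decides_about_people=True, user_facing=True).value)    # high
print(triage(decides_about_people=False, user_facing=False).value)  # minimal
```

Even a coarse triage like this gives you a defensible starting point for prioritizing which use cases need the deepest review.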

Step 3: Evaluating Usability

As quickly as AI is evolving, gaps can still leave your organization open to risk. For example, consider an AI tool that builds health projections based on pre-2020 data: it could not account for health trends that emerged from the Covid-19 pandemic, a huge pitfall in that sector. The timeframes for data usage and updating can, and should, be disclosed without giving away your trade secrets, and doing so helps reduce the potential for legal action.
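One way to operationalize this is a simple freshness check against each model’s recorded training-data cutoff. The sketch below is a minimal illustration; the `is_stale` helper and the 365-day threshold are assumptions you would tune to how quickly data in your domain goes stale.

```python
from datetime import date

# Illustrative assumption: flag any model whose training data is more than
# a year old. Pick a threshold appropriate to your sector.
MAX_DATA_AGE_DAYS = 365

def is_stale(training_data_cutoff: date, today: date | None = None) -> bool:
    """Return True if the model's training data is older than the threshold."""
    today = today or date.today()
    return (today - training_data_cutoff).days > MAX_DATA_AGE_DAYS

# A health-projection model trained only on pre-2020 data is flagged at once.
print(is_stale(date(2019, 12, 31)))  # True
```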

Step 4: Disclosures

While it isn’t necessary to share the specific data that went into a particular AI model, it is generally innocuous to reveal its potential limits through high-level indicators. When in doubt, disclose. The key to staying off the enforcement radar aligns with the privacy practices you’re already employing: disclose, minimize, and provide notice where necessary.
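As a concrete illustration, a high-level disclosure can be generated from a few pieces of metadata without exposing the training data itself. The template below is a hypothetical sketch; the wording and fields are assumptions to adapt with counsel’s guidance.

```python
def disclosure(purpose: str, data_cutoff: str, known_limits: list[str]) -> str:
    """Render a plain-language disclosure from high-level indicators only."""
    return (
        f"This feature uses AI to {purpose}. "
        f"Its training data extends through {data_cutoff}. "
        f"Known limitations: {'; '.join(known_limits)}."
    )

# Reusing the health-projection example from Step 3.
print(disclosure(
    purpose="generate health projections",
    data_cutoff="December 2019",
    known_limits=[
        "does not reflect post-2020 health trends",
        "not a substitute for professional medical advice",
    ],
))
```

Note that nothing in the output reveals what the training data actually contains, only its boundaries and known limits.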

Step 5: Monitor Legislation

This is less an action item than a standing recommendation. As we all know, privacy legislation moves quickly, and regulation around AI will be no different. One thing we know will hold true across regulations is an increased need for documentation around your AI systems. Keeping track of what lawmakers are proposing in your jurisdiction will help you prepare for any laws coming your way.

As always, we will keep you apprised of the trajectory of AI regulations and how they will affect your organization as AI becomes a central topic in privacy. If you have any questions, please reach out to hello@truyo.com.


Author

Dan Clarke
President, Truyo
July 20, 2023
