Yes, You Do Need an AI Policy

Of our team, I’m possibly the one most hesitant to incorporate AI into my work – I think perhaps I saw the Yul Brynner “Westworld” when I was far too young. But these tools are increasingly popular and an undeniable part of our future – a future that is already upon us.

So, knowing resistance was futile, and after a team discussion about the need, I did some introductory research on where the world stands on AI policies for the workplace. Long story short: if you use any AI programs (and you likely do, even ones you’re not used to calling AI – Grammarly, for instance), you should have a policy.

Your policy can be fully customized to your organization and your work. Here, however, are some aspects I recommend considering:

Define Your Why

This aligns with a question I think is core to policy creation: What are you protecting? For our team, the use of AI seemed to come down to two key pieces (perhaps an over-simplification, but all points came back to these): the integrity of our work and the well-being of our clients. For the integrity of our work, we want everyone we engage with to know that we will not be using these tools to replace any of our own expertise – far from it. You’re still getting all of our experience and intentionality. The well-being of our clients is central to all we do, but we bring it up very intentionally here. Some of our clients and their stakeholders may face significant legal implications (current or future) related to the nature of their work. We have made room in our policy to acknowledge this and to create practices that do not compromise the work or future of our clients. In many cases, this simply means refraining from using AI tools where we might have otherwise.

Consider AI’s Shortcomings

We know AI isn’t going to produce perfect anything (true for people as well, perhaps, but that’s a different blog). As a recording or transcription tool, for instance, it often mixes up words or attributes them to the wrong speaker. In other scenarios, its sentences lack the clarity or nuance you can provide as an expert in your own subject. There are a few ways we take this into consideration:

  • Disclaimers/acknowledgement of flaws: Where we have offered conversation transcripts for use by clients, we include a statement noting that the transcript remains unedited and may be flawed.

  • Read, refine, and rewrite as necessary: Where AI serves as a content generation or synthesis tool, for instance, we acknowledge that we may use it, but the final products are fully edited, vetted, and rewritten to convey our professional intent.

  • Immediacy is a factor: For example, if you are (with consent – see more below) recording for future reporting on a subject, processing that content as soon as possible is key. The flaws that only human experience can catch are easily lost to time and fading memory.

Consent of All Parties

As part of our commitment to our clients, we want to ensure we are disclosing our use of AI throughout our projects, both to our clients and to their stakeholders. Our policy outlines specific moments in the life of a project where we may use AI tools, how we will acknowledge it in those moments, and how we will solicit consent before moving forward, referring back to our formalized AI policy. With this in mind, in addition to acknowledging our policy, we have written succinct language into our contracts to set the stage from the outset of any project.

Storage

This one is necessary but tricky, as so many of these tools store information internally, and it’s hard to fully understand what that looks like for each. We, of course, note that all content belongs to The Spark Mill. Beyond that, we acknowledge the aforementioned concern, particularly as we consider the well-being of our clients discussed above. Simply put, know and acknowledge that just because you say certain content is yours, that doesn’t mean you can fully control it once certain AI tools have been engaged.

Create a Living Document

Do you remember how recently the use of AI – and policies around it – wasn’t a concern at all? Things are shifting, changing, and growing rapidly in this field. Most resources suggest revisiting your AI policy at least annually. At The Spark Mill, we’re all about change, and we suspect our relationship with AI will be full of it!

Length

The world is full of exhaustive manuals, and maybe for some things that’s necessary. We want our team and our clients to read the information in our policy, understand it, and move forward from there. Also, remember, it’s a living document, and you don’t want to revisit 60 pages each year. At present, our AI policy is two pages long – specific and succinct and, therefore, functional. It’s easy to pick up, refer back to, and update or amend as needed. (For my own part, I am prone to being verbose, so I understand the urge to write more. It’s hard to resist, but resisting will save you in the long run.)

The Tools

We’re choosing to keep a list of the primary programs we’re using (and simple statements of how we’re using them) in our policy document. As part of a living document, this list should be revisited and updated on a regular basis. It provides context for the policy’s details.

The above considerations certainly aren’t exhaustive, but we hope you will find them useful in creating your own policy!

If you want to take a peek at our policy as part of the development of your own, feel free to reach out!
