HubSpot is coming to the rescue!
But first ask yourself this question:
Does your team have clear, structured guidelines or principles on AI usage?
The overwhelming majority of people will likely still answer “No.”
The conversation around AI safety and transparency at work, however, is starting to gain more traction in our industry.
At RP, we continue to push the importance of discussions like these – which is why we released two resources over the past few weeks: our template of AI guidelines and principles, and our MOPs AI Advisor custom GPT.
Both of these resources were designed to help members of the MOPs community (and entire organizations) implement a system of transparency, accountability, and safety when it comes to AI integration in the workplace.
And we’re excited to see other organizations embracing this conversation as well. The most recent example is HubSpot’s article: “The Complete Guide to AI Transparency [6 Best Practices].”
Below are the 6 steps HubSpot has come up with for creating a transparent AI policy:
Step 1: Define and align your AI goals.
Step 2: Choose the right methods for transparency.
Step 3: Prioritize transparency throughout the AI lifecycle.
Step 4: Continuous monitoring and adaptation.
Step 5: Engage a spectrum of perspectives.
Step 6: Foster a transparent organizational culture.
We think these steps provide a great foundation for organizations to build on.
When it comes to following these steps in the real world, our own resources serve as complementary tools that can expedite the process.
For example, take “Step 1: Define and align your AI goals” – this is where our template on AI guidelines and principles comes in. When you sit down to create tangible documentation that clearly describes your organization’s AI goals, the template provides a robust starting point.
And we’re constantly experimenting with AI in different ways.
One AI use case can drastically differ from another from a safety and transparency perspective, which is why our MOPs AI Advisor can be a big help with “Step 2” through “Step 5” of HubSpot’s best practices.
You can lean on our custom GPT as a second perspective on your experiments, ensuring you choose the right tools and account for additional privacy and safety implications you may run into. You can also re-prompt the advisor to continuously monitor your experiments, adapting your strategies as needed based on its feedback.
While MOPs AI Advisor certainly isn’t designed to replace the perspectives of actual people in your organization, it can shine a light on potential viewpoints that others around the company may hold – which you can then verify through an open dialogue with those people.
All of these things contribute to “Step 6: Foster a transparent organizational culture.”
This happens over time, but clarity and consistency are key.
Also, if we’ve learned anything from AI so far, it’s that the situation is fluid. Things can change overnight, so it’s important to stay on top of new developments and understand how they impact your team.
We’re grateful to HubSpot for joining us in bringing important conversations like these to the forefront.