Our Statement on AI Usage

Taking an ethical approach to AI matters because, like everything else in business, our choices now have ripple effects on people, communities, and society. AI isn’t just another tool; it can shape decisions, influence behaviour, and even reinforce bias if we’re not careful. 
That’s why we must think beyond short-term gains and consider the bigger picture. We want our business practices to reflect our values; the same goes for how we use AI. When we build in ethics from the start, we’re not just protecting our reputation; we’re helping to create a future that’s fairer, more inclusive, and genuinely better for everyone. 

Sarah Whale, Founder of Profit Impact

Introduction

This Usage Statement outlines how we intend to use Artificial Intelligence (AI) responsibly in our operations. We view AI not solely as a tool for productivity, but as a new technology that requires accountability. This document defines the governance structure, operational procedures, and risk mitigation strategies that guide our use of AI. It is the product of substantial research we have conducted to better understand the growing prominence of AI in business. Because the AI space is constantly evolving, this Usage Statement will be reviewed and updated to keep pace.

1. Governance & Oversight Framework

1.1 AI Oversight Officer (AIOO)

We appoint a designated AI Oversight Officer responsible for:

  • Ensuring compliance with internal and external AI standards.
  • Coordinating an AI Update Taskforce to manage policy reviews and urgent amendments.
  • Reviewing flagged incidents and providing guidance on appropriate responses.

1.2 AI Update Taskforce

We appoint a taskforce responsible for:

  • Conducting periodic reviews of this AI usage policy.
  • Monitoring significant developments in AI technology, regulation, and data accessibility.
  • Issuing interim updates in response to emerging issues.

2. Ethical Use and Human Oversight

2.1 Content Review Protocol

  • All AI-generated content (text, image, video) must undergo human review before it is presented to customers or partners, or published online.
  • Team members are responsible for checking AI output for factual accuracy, bias, inappropriate assumptions, and tone alignment.

2.2 Fairness and Inclusion

  • We remain conscious of potential algorithmic bias in AI tools.
  • When creating visual or textual content, we work to ensure that representation is fair, inclusive, and suitable for diverse audiences, aided by our oversight process.

3. Legal and Regulatory Compliance

3.1 Provider Terms of Service (TOS) Adherence

  • All AI tools must be used in compliance with the Terms of Service set out by their providers.
  • Particular care is taken in relation to copyright, re-use, and the licensing of AI-generated assets.

3.2 Anticipation of Legal Change

  • Given ongoing uncertainty around intellectual property and AI regulation, our Update Taskforce monitors legal developments and prepares the business to adapt.
  • We prioritise transparency and consent in data usage when configuring or using AI-powered analytics.

4. Environmental Sustainability

4.1 AI Tool Selection Criteria

We prioritise tools whose providers demonstrate clear environmental commitments, such as:

  • Stated goals for reducing energy and water use in data centres (or, at a minimum, a longer-term plan to do so).
  • Publicly available sustainability commitments or environmental impact disclosures.

4.2 Emissions Tracking Integration

  • In the near future, we aim to integrate AI-related emissions into our broader environmental reporting as tracking tools improve.
  • Until then, we apply best practices in sustainable business operations; for example, using AI tools only when they provide clear value.

5. Employee Training & Engagement

5.1 Foundational AI Literacy

All employees will receive basic training on:

  • What AI is and how it works.
  • Ethical considerations and responsible usage.
  • The company’s approved AI tools and their purposes.

This training will be included in the onboarding process for all new employees.

5.2 Continuous Learning Culture

  • Employees are encouraged to follow trusted sources of AI information; for example, blogs by OpenAI, Google, and Microsoft provide excellent, concise updates.
  • Resources are shared monthly, with periodic refreshers and ad hoc sessions following significant AI developments.
  • Records of training will be maintained.

6. Risk Management & Incident Reporting

6.1 Risk Matrix

We maintain a live risk matrix that categorises:

  • Legal Risks: Intellectual property ownership, data misuse.
  • Social Risks: Bias, misinformation, lack of diversity.
  • Environmental Risks: Carbon and water footprint of AI tools; as noted in Section 4, we will track these more accurately as clearer data emerges.

6.2 Incident Reporting Process

  • Any staff member may report an AI-related concern directly to the AIOO.
  • The AIOO investigates all flagged cases and may escalate issues to the Update Taskforce.

7. AI Tool Evaluation Process

Before adoption, any new AI tool must undergo a formal evaluation that asks:

  • What problem does it solve or what value does it add?
  • Are there concerns around misinformation, bias, or social harm?
  • Are the supplier’s environmental policies transparent and adequate?
  • How does it handle data privacy and user consent?
  • Are there any risks or limitations within the terms of service?
  • Were less intensive or non-AI alternatives explored?

A short evaluation report is filed before adoption, and all tools are reassessed annually. Team members may complete the evaluation themselves or refer it to the AIOO; in all cases, the completed form must be verified by the AIOO.

8. Measurement & Success Metrics

To ensure accountability and continuous improvement, we track:

  • Productivity Gains: Time saved or outputs improved via AI tools.
  • Carbon Impact: Changes in carbon reporting as AI emissions tracking becomes feasible.
  • Diversity Checks: Outcomes of human reviews of AI content flagged for bias.
  • Policy Awareness: Completion rates of employee training.
  • Flagged Incidents: Number and nature of concerns raised through reporting.
  • Risk Matrix: Metrics from the live risk matrix, shared with the board quarterly.

These may be recorded via written reports or in-person meetings.

Conclusion

AI is more than a means to heightened productivity; its use is a responsibility. At Profit Impact, we embrace AI to enhance our work while holding ourselves to the highest standards of accountability. Our detailed governance framework ensures responsible implementation, regular review, and alignment with our core values. We believe this approach sets us apart as leaders in the sustainable, ethical, and human-centric adoption of AI, and we would love to help other businesses do the same.

Document written by Freddie Whale, Impact Intern, Profit Impact.

I would like to thank Edward Falzon and Tim Dee-McCullough for collaborating with me in the process that has created this document. Their advice was invaluable.

Edward Falzon: 

Chair, Board Advisor, Non-executive Director, Coach, Mentor, Consultant - Financial Services & Beyond

https://uk.linkedin.com/in/edward-j-falzon

Tim Dee-McCullough:

Sustainability, governance and reporting advisor | Founder of Ancoram | IoD Policy and Governance Ambassador | FCCA | FRSA

https://www.linkedin.com/in/timdee-mccullough/