Artificial Intelligence Policy
We developed this manifesto to communicate our views on AI to our clients, staff and partners; to set out an AI policy; and to establish the working principles and practices that guide our day-to-day use of the technology. We believe Artificial Intelligence (AI) is a transformative technology that, when used responsibly, can empower our team, enhance our services, and deliver greater value to our clients. The purpose of this manifesto is to articulate our beliefs, define our stance, and guide the integration of AI into the fabric of LOW Associates’ work.
We will not waste time debating AI’s importance: it is historic in scope and inevitable in impact. We do not intend to offer services our clients can do, or would prefer to do, better in-house with AI. Instead, we will focus on what AI cannot do: strategic thinking, original insight, genuine creativity, and the achievement of real-world outcomes.
We will use AI to optimise project management, streamline internal processes, and enhance our analytical capabilities. We will become experts in this fast-moving domain, understanding its strengths, recognising its limits, and embedding it deeply into our culture to better serve our clients. We will lead with confidence, not follow with hesitation. And we will always use AI transparently and honestly, clearly informing stakeholders when and how it is applied.
Purpose
This policy outlines LOW Associates' guiding principles and responsibilities concerning the development, use, and oversight of Artificial Intelligence (AI) across both internal operations and client-facing services. The aim is to promote a consistent, ethical, and practical approach to AI adoption that reinforces our organisational purpose and safeguards stakeholder interests. AI technologies present transformative opportunities for enhancing efficiency, creativity, and data-driven decision-making, and we are committed to integrating them in ways that reflect our professional integrity.
The policy is designed to ensure that all AI deployments are ethically sound, operationally secure, and strategically aligned with our overarching mission. This includes proactive engagement with risk mitigation, regulatory compliance, and the promotion of human-centric design in all AI applications. It supports our core values and reflects our obligations under relevant legal and regulatory frameworks, such as the General Data Protection Regulation (GDPR) and the EU AI Act, as well as our internal standards of excellence.
Scope
This policy applies to all employees, contractors, temporary staff, interns, and associated freelancers who engage with AI tools, systems, or projects on behalf of LOW Associates. Engagement includes the use, management, development, deployment, or procurement of AI-related technologies.
Principles
Transparency: We are open about where and how AI is used in our operations and services. Clients will always be informed when AI is involved in decision-making processes. Our AI systems are designed to be explainable, ensuring that outcomes can be justified and understood by humans.
Accountability: Human oversight is mandatory at every stage of AI design, deployment, and operation. All AI output affecting clients, partners, or the public must undergo human review to confirm its appropriateness and to catch errors such as hallucinations.
Fairness and Non-discrimination: AI applications must be assessed for bias. We actively work to identify and mitigate bias in our AI systems, ensuring fair and impartial outcomes for all stakeholders.
Privacy and Data Protection: All AI systems must strictly adhere to the General Data Protection Regulation (GDPR), the EU AI Act, and LOW Associates’ internal data governance protocols. This includes implementing data minimisation, obtaining informed consent where required, and using anonymisation or pseudonymisation techniques to safeguard individual identities. AI systems must not collect or process personal data beyond what is necessary for their intended function.
Security: AI tools and data must be protected through a comprehensive framework of technical and organisational measures, including robust access controls, encryption, activity monitoring, and incident detection. Security practices must be reviewed regularly, and AI systems should undergo penetration testing and resilience audits to mitigate the risk of breaches, manipulation, or misuse by internal or external actors.
Use cases
The primary AI systems used by LOW Associates are Large Language Models (LLMs). Our current list of approved AI systems is as follows:
ChatGPT and custom OpenAI GPTs, as well as their integrations in third-party applications (such as Airtable, Make and Zapier)
Microsoft Copilot
Perplexity
Claude
Otter.ai
ElevenLabs
Google Gemini and NotebookLM
Approved use cases include, but are not limited to:
Content drafting and editing assistance
Translation and language support
Workflow streamlining and process automation (e.g., scheduling, summarising)
Data-driven insights and analysis
Prohibited uses include:
Autonomous decision-making that affects employment or contractual decisions
Deployment of AI in politically sensitive or high-risk contexts without senior and/or client sign-off
Stakeholder Communication and Consent
Clients, partners, or the public must be notified when AI is being used in a way that affects their interaction with LOW Associates. Explicit consent should be obtained for high-sensitivity applications, and communication must be clear, accessible, and culturally appropriate.
Incident Response and Redress
Any unintended consequences, errors, or harms resulting from AI system use must be reported immediately. A structured process for investigation, remediation, and, where necessary, stakeholder notification will be in place.
Continuous Improvement and Review
We regularly review and update our AI practices and policies to reflect technological advancements, regulatory changes, and feedback from clients and employees.
LOW Associates encourages responsible experimentation with emerging AI technologies. Sandboxed environments are used for testing, and lessons learned must feed back into the policy, training, and implementation playbooks to ensure continuous alignment with best practices and evolving norms.
This policy will be reviewed annually or in response to significant legal, technological, or operational changes.