
Artificial Intelligence (AI) Usage Policy for Employees

This policy applies to everyone employed directly by the Diocese of York. It does not formally cover incumbents, PCCs, or anyone employed by a PCC.

That said, we know many parishes are already using AI tools, or considering them. To help, we have also produced Guidance for PCCs, which can be downloaded below. You are welcome to copy and adapt wording from the diocesan policy and guidance to suit your local context.

To support parishes further, we will run a live webinar on 9th March at 4.00pm. Registration details will be shared here. After the event, this page will be updated with a recording link.

Because AI is developing quickly, this page will include an Updates section at the top. Any small changes or clarifications to the policy will be posted there. The Diocesan Board of Finance will review the policy annually.

With thanks to, and acknowledgement of, the work of NCI, Truro and Exeter Dioceses in the formation of this policy.

  1. Policy statement
  2. Who is covered by this policy?
  3. Definition of Artificial Intelligence tools
  4. Approved Artificial Intelligence tools
  5. Review of tools and applications for additional tools to be included in the permitted list
  6. Data security and confidentiality
  7. Appropriate use guidelines
  8. Quality control and human oversight
  9. Training and support
  10. Monitoring and compliance
  11. Breaches of this policy
  12. Guidance and resources

1. Policy Statement

Artificial Intelligence (AI) tools are increasingly being used in workplace settings and can offer significant benefits in efficiency and productivity, as well as the potential to enhance our effectiveness in serving our community. This policy outlines the York Diocesan Board of Finance’s (YDBF) approach to the use of AI tools in the workplace, ensuring their responsible and secure use while protecting the organisation’s interests, maintaining data security, and upholding our values and reputation.

As an organisation, we recognise that there are both opportunities and responsibilities that come with AI use. While we embrace these technological advances, we must ensure that their deployment aligns with our values, legal obligations, ethical considerations and duty of care to all stakeholders.

Five principles addressing ethical concerns from a biblical perspective:

(taken from The Church of England Ethical Investment Advisory Group (EIAG) Advice – https://www.churchofengland.org/sites/default/files/2025-01/eiag-artificial-intelligence-advice-2024.pdf)

Flourishing as persons 

  • That it does not undermine inherent human dignity  
  • That it does not lead to a devaluation of good work and of human skill, expertise and creativity

Flourishing in relationship  

  • That it does not lead to a devaluation of existing human relationships (e.g. by becoming too used to the attenuated relationships afforded by robots)
  • That it encourages honesty and authenticity (e.g. AI is clearly labelled as such to minimise wilful or inadvertent deception)  

Standing with the marginalised  

  • That it does not exacerbate societal prejudice (e.g. by training AI algorithms with biased data sets)  

Caring for the creation  

  • That it is not wasteful of resources (e.g. needlessly encouraging single-use or constant upgrade to new product models)  
  • That it does not add significantly or needlessly to emissions (e.g. through the need for more energy-consuming data servers)  

Serving the common good  

  • That it does not lead to greater inequality through poor distribution of the benefits of AI 

This policy acknowledges the beneficial use of AI tools while establishing clear boundaries for their appropriate application within YDBF. It connects with our Data Protection and Information Security policies, as we have a duty to protect confidential information and personal data. AI usage can present risks to data security, intellectual property rights, the accuracy of outputs, and the maintenance of human oversight in important decision-making. The policy aims to strike a balance between the benefits of AI and the need for responsible use, data security, and human oversight. It provides clear guidelines on approved tools, appropriate use, and the consequences of policy breaches.

In the event that this policy and the law conflict, the law shall take precedence. If employees are in any doubt as to their rights, they should discuss the matter with their manager. If this policy changes because of amendments in the law, the changes will be communicated to employees via their managers. This policy does not form part of your contract of employment, and it may be amended at any time.

2. Who is covered by this policy?

This policy is intended to apply to all employees of the Diocese of York (hereafter referred to as YDBF), including full-time, part-time and fixed-term employees and home workers. In addition, casual and agency staff and volunteers who use AI tools in connection with their work for YDBF are expected to abide by this policy. In such cases, the individuals will be made aware of this policy by their supervisor, along with our Policy for the use of electronic information and communications systems. Those covered by the policy are referred to as ‘employees’ for the purposes of this policy.

All employees are expected to adhere to the policy guidance in relation to AI, and we expect all employees to exercise sound judgment in their use of AI tools. If in any doubt about their use, employees should contact their line manager/Director of Operational Support. Employees should take appropriate precautions to protect YDBF’s electronic communications systems equipment from unauthorised access and harm at all times. Failure to do so may be dealt with under the Disciplinary Procedure and, in serious cases, may be treated as gross misconduct leading to summary dismissal. As AI technology continues to evolve, this policy will be reviewed and updated accordingly. Changes will be communicated to employees through their managers.

3. Definition of Artificial Intelligence Tools

For the purposes of this policy, AI tools (also termed Generative AI) include but are not limited to:

  • Large Language Models (such as ChatGPT, Claude, or Gemini (formerly Bard))
  • AI-powered productivity tools
  • AI image generation tools
  • AI transcription services
  • AI-powered analysis tools
  • AI writing assistance tools

Generative AI (GenAI) is technology that synthesises new text, audio, or imagery from bodies of data in response to user prompts. GenAI models can be used through public AI tools, such as ChatGPT, or embedded within other software, such as Adobe’s applications.

Particular awareness is needed around the preservation of human influence in the content being generated and intellectual property rights.

Analytical AI refers to systems that use data to identify patterns, make predictions, or support decision-making. It does not create new content but helps analyse information—such as trends in giving, attendance, or building use—to inform planning and strategy.

Particular awareness is needed around the uploading or harvesting of sensitive data and the level of decision-making delegated to AI rather than to a human being.

Artificial Intelligence (AI) and automation are not the same, but they often work together and can be easily confused for one another. AI helps machines to learn, reason, and make decisions, whereas automation uses predefined rules to perform tasks without human intervention, such as setting a reminder or turning on the lights at a certain time.

4. Approved Artificial Intelligence Tools

For work-related tasks, or when working on YDBF-owned devices, employees must only use AI tools that have been approved by the Director of Operational Support (Deputy Diocesan Secretary). This includes generating reports, drafting emails, or producing any other professional communication.

The use of AI tools must be:

  • Through official corporate accounts, where applicable
  • In compliance with all licensing and usage agreements
  • For legitimate business purposes
  • In accordance with data protection regulations
  • Within the scope of the employee’s role and responsibilities

Personal accounts for AI services must not be used for YDBF work purposes unless specifically authorised by the Director of Operational Support.

Permitted tools as of November 2025:

  1. Microsoft 365 Copilot
    • This can be accessed by browsing to https://copilot.microsoft.com and following the on-screen prompts. Anything searched or queried within this portal is stored solely within YDBF’s Microsoft environment.
    • To be used for text content generation, research assistance, and data analysis.
    • We suggest using the Microsoft 365 Copilot training materials on the Microsoft website if you need guidance.
    • Microsoft Teams transcription and recap function as long as using your diocesan email address (as data is stored within our security parameters).
    • Legal and safeguarding advice should not be sought from AI.
  2. Zoom
    • Only record meetings on specific team accounts, carefully considering who else has access to these. Individual accounts for confidential meetings can be set up if needed.
  3. Canva AI
    • To be used by approved Canva account holders within the ethical framework of this policy
  4. Other AI tools, such as ChatGPT, Gemini (Google), Claude or Perplexity, may be used only to compare responses and to check for accuracy or anomalies in queries of a general nature: this means that use of other AI tools must not involve the uploading of any diocese-specific data or other identifiable information, and must not replace professional, legal or safeguarding advice.

5. Review of tools and applications for additional tools to be included in the permitted list

Employees wishing to use tools in addition to those on the permitted list must make an application to the Director of Operational Support for review. Employees should make a case outlining the risk/benefit balance as they see it. Applications will be considered using the following matrix:

Low autonomy/low data sensitivity
Where the data entered is small in volume, publicly available or of a non-sensitive nature, and the information generated will be used to inform human decision-making, applications are likely to be permitted.
High autonomy/low data sensitivity
Where the data entered is small in volume or not sensitive, but AI tools will be used autonomously and/or in decision-making, applications will be considered.
Low autonomy/high data sensitivity
Where it is proposed that large amounts of data and/or sensitive data are to be uploaded, but AI tools are to be used to inform and assist human decision-making, applications will be considered.
High autonomy/high data sensitivity
Where it is proposed that large amounts of data and/or sensitive data are to be uploaded and AI tools will be used autonomously and/or in decision-making, applications will be considered.
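
For illustration only, the short Python sketch below expresses the matrix as a simple lookup from the two factors (autonomy and data sensitivity) to the starting position for an application. The table and function names are hypothetical and do not form part of the policy; decisions are always made by the Director of Operational Support following review.

```python
# Illustrative sketch only: the review matrix expressed as a lookup table.
# The names below are hypothetical and are not part of the policy.

REVIEW_MATRIX = {
    ("low", "low"):   "likely to be permitted",
    ("high", "low"):  "will be considered",
    ("low", "high"):  "will be considered",
    ("high", "high"): "will be considered",
}

def starting_position(autonomy: str, data_sensitivity: str) -> str:
    """Return the matrix's starting position for an application,
    given the level of AI autonomy and the sensitivity of the data."""
    return REVIEW_MATRIX[(autonomy.lower(), data_sensitivity.lower())]

# Example: a tool that summarises publicly available information for a
# human to review falls in the low autonomy / low sensitivity quadrant.
print(starting_position("low", "low"))  # likely to be permitted
```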

6. Data Security and Confidentiality

When using AI tools, employees must:

  • Never input confidential information, personal data, or sensitive organisational information into external AI tools unless certain that the tool is secure (for example, the approved Microsoft environment). This includes the content of meetings that involve sensitive data
  • Ensure any data used is appropriately anonymised and aggregated (see the illustrative sketch after this list)
  • Grant AI tools only the minimum necessary permissions for access to other systems, such as Outlook diaries and OneDrive, and ask for support if not sure how to manage this
  • Be aware that information entered into external AI tools may be retained by the service provider
  • Only use approved internal AI tools for processing sensitive or confidential information
  • Report any potential data breaches immediately to the Head of Governance and the Director of Operational Support.
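
As an illustration of the anonymisation point above, the following Python sketch removes e-mail addresses and UK-style phone numbers from free text before it is shared with an external AI tool. The patterns are examples only: they will not catch every kind of personal data, and they are not a substitute for the approved tools or for a GDPR risk assessment.

```python
import re

# Illustrative sketch only: strip obvious personal data from free text
# before it is pasted into an external AI tool. These patterns are
# examples and will not catch every kind of personal data.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b0\d{4}\s?\d{3}\s?\d{3}\b")  # UK-style numbers with a leading zero

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    return text

print(redact("Contact the office on 01904 123 456 or admin@example.org"))
# -> "Contact the office on [phone removed] or [email removed]"
```

Names, postal addresses and other identifiers would still need to be removed manually or handled within the approved Microsoft environment.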

7. Appropriate Use Guidelines

Employees must review AI-generated materials for accuracy before publishing or sharing. Not everything which is created by AI can be guaranteed to be accurate. Employees must consider the ethical implications of AI use, ensuring fairness, transparency, and accountability (see below) in all AI-assisted work. Efforts should be made to identify and mitigate biases in AI-generated content.

The carbon footprint associated with the use of AI is considerable, as the training of AI models is energy-intensive and produces high carbon emissions. Employees should balance the advantages of using AI against the environmental impacts of doing so.

Search engines organise and prioritise results using algorithms which take into account factors like your search history, location, and personal preferences. Because these algorithms are shaped by human behaviour and societal norms, they can unintentionally reinforce stereotypes or biases present in the wider world. When AI tools use this data, they may reflect these same biases in their outputs. Being aware of this helps us use AI more thoughtfully and responsibly.

Where there are perceived gaps in information, AI may generate plausible but incorrect content, which can also lack relevant context; this is called hallucination. For example, an AI tool might describe a fictitious event or produce incorrect data that seems credible. It is vital that any AI-generated content is carefully checked for accuracy before use. AI must not be used instead of proper legal, safeguarding or other professional advice.

Where AI is being used to process data, a GDPR risk assessment must be undertaken.

When using AI to support their work, employees should do their utmost to:

Fairness:

  • Consider the ethical implications of AI use in their work
  • Ensure that AI applications do not discriminate against any group or individual

Transparency:

  • Do not rely solely on AI applications when making critical decisions
  • Clearly document the use of AI applications in significant work

Accountability:

  • Verify all AI-generated content before use
  • Use AI applications as assistance rather than a replacement for human judgement
  • Maintain human oversight and responsibility for AI-assisted decisions

Sustainability:

  • Do not use AI unnecessarily in order to limit environmental impacts. To use AI more sustainably:
    • Write clear, precise prompts
    • Cut the filler (save the small talk)
    • Generate only when necessary
    • Avoid peak-hour usage (data suggests that between around 10am and 3pm electricity demand is lower and a greater share is met from renewable sources, making this the best time to use AI)

8. Quality Control and Human Oversight

All work produced using AI tools must be:

  • Reviewed for accuracy and appropriateness
  • Checked for potential biases or errors
  • Verified against YDBF’s standards and values
  • Subject to the same quality control processes as non-AI-assisted work
  • Properly attributed when required by policy or law

9. Training and Support

YDBF will provide training on the responsible use of AI tools. Employees are required to attend these sessions and stay informed about the latest AI developments and policy updates.

Training will include:

  • Effective Use: Best practices for using AI tools efficiently and responsibly.
  • Risk Awareness: Understanding potential risks and limitations of AI tools.
  • Policy Updates: Keeping up-to-date with changes in AI policy and best
    practices.

10. Monitoring and Compliance

YDBF will:

  • Monitor the use of AI tools on YDBF systems and devices (with advice from the AI working group)
  • Audit AI tool usage for compliance with this policy
  • Review AI-generated content used in YDBF work
  • Restrict or block access to AI tools where necessary

11. Breaches of this Policy

Breaches of this policy may result in:

  • Disciplinary action under the Disciplinary Procedure
  • Restriction or removal of access to AI tools
  • Additional training requirements
  • Review of work processes and procedures.

Serious breaches, particularly those involving confidential information or data protection, may be treated as gross misconduct.

12. Guidance and resources

AI Policy Guidance for PCCs

.docx / 43 KB

As registered charities, Parochial Church Councils (PCCs) are subject to the same legal and ethical responsibilities as other charitable organisations. This includes ensuring that any use of emerging technologies—such as artificial intelligence (AI)—is consistent with their charitable purposes, complies with data protection and safeguarding requirements, and upholds public trust.