AI Transparency and Responsible Use Policy
Introduction
At MUFG Investor Services (‘we’, ‘us’, ‘our’) we are committed to using artificial intelligence (“AI”) tools (“AI Tools”) responsibly and to being transparent about what we do, in order to build trust and confidence with our clients, employees, and other stakeholders. This AI Transparency and Responsible Use Policy (“Policy”) sets out how we ensure we use AI Tools responsibly, what we consider to be acceptable and unacceptable uses of AI Tools, and the steps we take to safeguard individuals whose data are processed by these AI Tools.
What do we mean by AI Tools?
When we talk about AI Tools, we mean AI Systems and General Purpose AI Systems.
“AI Systems” are machine-based systems designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infer, from the input they receive, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
“General Purpose AI Systems” are AI Systems based on General Purpose AI Models which have the capability to serve a variety of purposes, both for direct use as well as for integration into other AI Systems.
A “General Purpose AI Model” (also called a foundation model) means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.
We use third party AI services from providers who make pre-trained AI foundation models available to us and we also partner with vendors who use AI Tools when providing their services to us.
How We Use AI Tools
We use AI Tools to enhance our existing processes and to deliver operational efficiencies for ourselves and for our clients. This includes automating existing manual processes and integrating AI Tools into workflows and business operations and processes. Our AI Tools are designed for specific functions which include:
- Work Productivity – to assist our employees in day-to-day tasks by summarising internal documents and searching for internal information.
- Data Classification and Extraction – to assist in reading and extracting data from files and classifying that data according to our classification rules.
- Knowledge Management – to assist our employees with information and responses to routine queries relating to our processes and procedures.
- Software Development – to assist in programming, analysing and generating code, detecting defects and supporting product development.
- Security – to assist in detecting anomalies, suspicious activities, malicious behaviour, potential threats and in assessing supply chain risk.
Our AI Principles
Our AI Principles guide our use of AI Tools, with each principle informing our compliance steps and what is, and is not, an acceptable use of our AI Tools.
1. Human Agency and Oversight
Our use of AI Tools supports human centricity and supplements but does not supplant the role of humans in our processes.
We ensure that assigned individuals review the output of the AI Tool, together with other information available to them, and then act on the basis of that output, their review and the additional information. This is often referred to as having a ‘human in the loop’.
We do not engage in automated decision-making. Automated decision-making means that a decision about an individual which has a legal or other similarly significant effect is made automatically, on the basis of a computer determination using software algorithms, without any human review. A decision producing a legal effect is one that affects a person’s legal status or their legal rights. A decision that has a similarly significant effect is one that has an equivalent impact on an individual’s circumstances, behaviour or choices, for example the automatic refusal of an online credit application or e-recruiting practices without human intervention.
2. Fairness and Transparency
Our use of AI Tools is transparent and fair.
We have documented in this Policy how we use AI Tools, and we will regularly review and update this Policy to ensure it continues to reflect our approach to and use of AI Tools taking into account any changes in legal, regulatory or contractual obligations and technology.
As and when required by law or regulation, we make clear to users when they are directly interacting with our AI Tools, and we are transparent about the capabilities and limitations of our AI Tools, including notifying users that the output of our AI Tools is not guaranteed to be accurate.
As and when required by law or regulation, we conduct assessments of proposed use cases of our AI Tools (how we intend to use an AI Tool for a specified purpose) before authorising that use case, to ensure we have documented a detailed understanding of how the AI Tool will be used so we can trace and explain how the AI Tool supports that use case.
We are committed to ensuring that our use of AI Tools does not introduce any risk of bias or discrimination for any individual and that all individuals continue to be treated fairly by MUFG. We do this by carefully considering our use cases and restricting ourselves to those use cases that do not result in any predictions or decisions being made about individuals or about how resources or services are allocated to them.
3. Data Protection, Privacy and Confidentiality
We are committed to using AI Tools in a manner that respects and safeguards individual data protection and privacy rights and complies with our data protection obligations and commitments.
Our use of AI Tools involves the processing of ‘MUFG Personal Data’ (personal data relating to or identifying employees and directors of our clients as well as client-related parties, such as investment managers, investors into a fund that we administer, beneficiaries of our clients and trusts, and employees, directors and major shareholders (>25%) of third-party providers). Our AI Tools may also process data that is categorised as “Confidential Information” under the terms of service agreements with our clients (“Client Confidential Data”). The types of personal data we process, why we process this personal data and the lawful basis for our processing are set out in our Data Protection Notices here and in our data processing agreements with clients.
While we take reasonable steps intended to ensure that processing is limited to only what is necessary, our AI Tools may, for example, process an entire document which contains MUFG Personal Data or Client Confidential Data in order to extract relevant information from that document for a particular use case.
We take reasonable steps intended to ensure that our AI Tools are data protection compliant and safeguard the rights of individuals whose personal data are processed by the AI Tools. This includes complying not only with our obligations under data protection laws but also with our commitments to our clients as set out in our data processing agreements and to individuals whose data we process as set out in our Data Protection Notices here.
We also take reasonable steps intended to ensure that appropriate data protection agreements, which comply with our data protection obligations and commitments, are in place with the third parties whose AI services we use. These agreements include prohibitions or limitations on the use of MUFG Personal Data or Client Confidential Data to train any model, and prohibitions or limitations on the storage or use of any MUFG inputs or outputs by the third party.
We do not use MUFG Personal Data or Client Confidential Data to fine-tune or customise any model; however, we may do so in the future if we deem it necessary to support and enhance our operations and the services we provide to our clients.
4. Safety and Security
We take reasonable steps intended to ensure that our use of AI Tools is safe and secure.
We take reasonable steps intended to ensure that the AI Tools we use are used in a manner that is reliable and secure, by subjecting them to rigorous testing and deployment management processes so that they work as expected both before and while we use them.
We take reasonable steps intended to enhance the accuracy and reliability of the outputs of AI Tools. These include prompt engineering (context provided in the prompts developed by our teams ensures that the AI Tool receives specific instructions on how to process the data and will only perform the task it is instructed to complete) and human review, where our assigned employees consider the output of the AI Tool, as well as other information available to them, and then act based on this. Human-in-the-loop feedback is also used to refine prompts and improve the accuracy of the output of our AI Tools.
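By way of illustration only, the simplified sketch below (written in Python) shows how a constrained prompt and a ‘human in the loop’ review step might be combined in practice. The prompt wording, function names and the stubbed call_ai_tool function are hypothetical placeholders and do not describe any specific MUFG system or third-party service.

"""Illustrative sketch only: a constrained prompt plus a human review step.
The AI call below is a stub, not a real MUFG system or third-party API."""

from dataclasses import dataclass

# A prompt template that gives the tool specific instructions and limits it
# to a single, well-defined task (hypothetical wording for illustration).
PROMPT_TEMPLATE = (
    "You are assisting with document summarisation only. "
    "Summarise the document below in no more than five bullet points. "
    "Do not infer, score or decide anything about any individual.\n\n"
    "Document:\n{document}"
)


@dataclass
class ToolOutput:
    summary: str
    approved: bool = False
    reviewer_note: str = ""


def call_ai_tool(prompt: str) -> str:
    """Placeholder for a third-party AI service; returns a dummy summary."""
    return "- Placeholder summary produced from the supplied prompt."


def summarise_with_human_review(document: str, reviewer) -> ToolOutput:
    """Generate a draft with the AI tool, then require human sign-off."""
    prompt = PROMPT_TEMPLATE.format(document=document)
    draft = ToolOutput(summary=call_ai_tool(prompt))

    # Human in the loop: an assigned reviewer checks the draft against the
    # source document and other available information before anything is
    # acted upon; the reviewer's decision and note are recorded.
    draft.approved, draft.reviewer_note = reviewer(draft.summary, document)
    return draft


if __name__ == "__main__":
    def example_reviewer(summary: str, document: str):
        # In practice this is a person; here we simply record a decision.
        return True, "Checked against the source document; accurate."

    result = summarise_with_human_review("Quarterly operations report ...", example_reviewer)
    print(result.approved, result.summary)

In this sketch, no output is acted upon until the assigned reviewer has approved it, mirroring the human review described above; reviewer notes could equally feed back into refining the prompt template.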
We take reasonable steps intended to ensure a cautious and prudent approach to the deployment of AI Tools, including using AI Tools only for authorised, permitted and/or registered use cases that support and enhance our operations and the services we provide to our clients, and ensuring that the AI Tools we use are suitable for those use cases.
We comply with legal instructions for, and conditions of use applicable to, third party AI services.
5. Accountability
We take reasonable steps intended to ensure that our use of AI Tools is lawful, ethical and responsible and we are accountable for our AI Tools and their outcomes.
We conduct assessments to ensure that any proposed use case complies with our existing commitments to our clients and our contractual and legal obligations before authorising that use case.
We will never use our AI Tools to impersonate or misrepresent any person or entity, to analyse anyone’s behaviour or to deceive or mislead anyone.
We will never use our AI Tools to create, distribute or promote any content that is unlawful, offensive, obscene, defamatory, or discriminatory.
We will never use our AI Tools for any use case where use or misuse could result in physical or psychological harm or injury to any individual.
We will never use our AI Tools for any use case where use or misuse could have a consequential impact on life opportunities or legal status, for example, legal rights, access to credit, education, employment, healthcare, housing, insurance, social welfare benefits, services, opportunities or the terms on which they are provided.
We will never use our AI Tools to incite violence or hateful behaviour, engage in fraudulent, abusive or predatory practices or to spread misinformation.
We will never use our AI Tools to assign scores or ratings to individuals based on an assessment of their trustworthiness or social behaviour.
We will never use our AI Tools to build or support emotional recognition systems or techniques that are used to infer people’s emotions.
We will never use our AI Tools, including by engaging in untargeted scraping of facial images for facial recognition, to categorise people based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
We will never use our AI Tools to exploit vulnerabilities (such as personality traits, social or economic situation, age, physical or mental ability).
We will never use our AI Tools to engage in subliminal, deceptive or manipulative techniques designed to manipulate behaviour or circumvent free will.