- Purpose and Principles
- Red Brand Media supports the ethical and creative use of Artificial Intelligence (AI) tools to enhance journalistic productivity, storytelling, and analysis. This policy ensures all AI usage across RBM publications aligns with our values of accuracy, transparency, privacy, and fairness.
- Editorial Oversight and Human Review
- AI tools may assist with tasks such as summarisation, research support, design generation, and content drafting, but human editorial review is required before any AI-assisted content is published.
- Editors must ensure the material produced by AI is factually accurate, contextually appropriate, and upholds our editorial standards.
- Attribution and Transparency
- Content created or significantly assisted by AI will be clearly labelled or acknowledged. Examples include use of an AI byline or icon, or a short note at the end of an article.
- The nature and role of AI in editorial decision-making must be transparent and proportionate, particularly where it affects the substance of a story.
- Accuracy and Misinformation Prevention
- AI outputs must never be published unvetted. Journalists and editors must fact-check AI-generated content and guard against hallucinations, misinformation, and material omissions.
- AI should not be used to create photorealistic images, video, or speech depicting real people or events unless the use is appropriately disclosed and reviewed.
- Privacy and Confidentiality
- Staff may not input confidential, sensitive, or proprietary information into third-party AI platforms.
- Transcription or data analysis tools must adhere to data protection standards and never compromise private sources or protected content.
- Copyright and Plagiarism
- AI-generated material must not knowingly infringe the rights of writers, artists, photographers, or other content creators.
- Staff should identify and attribute original sources, particularly when styles or themes of known creators are mimicked.
- Non-Discrimination
- AI content must be checked for biased or prejudicial language and evaluated for fairness and representation. Human editors are responsible for ensuring inclusive, respectful communication.
- Internal Governance
- Editors will maintain a register of AI tools in use, along with staff experiments that may inform future workflows.
- Training and guidelines on AI ethics, prompting, and evaluation will be provided to relevant staff.
- Freelance and Third-Party Disclosure
- Freelancers must declare any use of AI tools in submitted content.
- Third-party materials sourced via platforms or AI tools must undergo standard editorial checks.
- Updates and Oversight
- This policy will be reviewed annually by the Editorial Board, or sooner in response to material developments in AI technology or regulation.