Responsible AI Usage Policy

Artificial Intelligence (AI) is an increasingly important tool in the marketing landscape, offering unprecedented possibilities for content creation and customer engagement. With that power, however, comes responsibility. This AI usage policy is therefore designed to guide our team in the responsible, transparent, and ethical use of AI in their work. The aim of this policy is not to hinder creativity or innovation, but to ensure that our use of AI aligns with our overall corporate values and respects our customers' rights.

Guidelines for Responsible AI Usage

Transparency

It's crucial that we remain transparent about our use of AI. This includes acknowledging when AI has been used to create or modify content, whether through a blanket statement on our website, an author attribution, or language integrated into client contracts.

Transparency Statement:

We use AI to assist in some content development at our company. To ensure transparency, accountability, quality, and privacy, we adhere to internal AI usage standards. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices. One of these standards is that AI should be used to assist in content creation, not fully automate it. We ensure that every piece of content we develop is shaped and reviewed by people who understand our audience and AI’s limitations.

Tool Selection

The following AI tools have been approved for use in our company. DO NOT use any tools other than those on this list, or others approved in writing by our security team, on company devices or for company-related work.

  • Jasper, the AI copilot for marketing teams that created this template, has best-in-class security and privacy policies, including SOC 2 compliance, SSO, and U.S. data storage.
  • ChatGPT, a chatbot that uses natural language processing to draft conversational responses and assist with customer support inquiries.
  • DALL-E 3, an AI tool that generates images from text descriptions, well suited for creating product mockups and marketing materials.
  • Whisper, OpenAI's speech-to-text model, which transcribes audio such as interviews, webinars, and meetings for use in content development.
  • Anthropic's Claude, an AI assistant that can help with drafting, summarizing, and analyzing content.

Accountability

Responsibility cannot be outsourced to a machine. Always remember that humans are ultimately accountable for the actions of the AI. AI is an assistant, not a replacement for good judgment. Our company policy is that we should NEVER publish or send something that has been written entirely by AI without human development or review for quality and accuracy. Additionally, in case of any negative outcomes from AI-assisted content, we must take responsibility and remediate as necessary.

Use Cases That Should Not Leverage AI

While there are many positive use cases for AI assistance in our work, there are specific types of work for which we, as a company, have decided to restrict the use of AI. Do not use AI for the following:

  • Content published without human review
  • Legal documents

Addressing Specific Issues

Bias

AI systems learn from the data they are fed, and thus can unintentionally perpetuate biases found in their training material. Many language models have filters to reduce the risk of bias or harmful outputs, but filters aren’t enough. It is our responsibility to ensure that content we produce is reviewed for potential bias and developed to be inclusive and accessible.

Privacy

We must protect the privacy of our customers. See our list of approved tools with reliable privacy policies, and do not submit customer data to AI tools or LLMs. In addition, we must protect our own intellectual property (IP). Sticking to the approved list of tools above will help safeguard both and ensure our data and IP are not used to train publicly accessible language models.

Security

AI systems can be targets for cyber-attacks. Please review the approved list of AI tools, and discuss with the security team any additional tools you subscribe to or use on company devices.

Ethical Considerations

AI should not be used to mislead or manipulate customers. All content created using AI should be ethical and in line with our corporate values. AI content should go through a review process to check for bias, inaccuracies and other risks.

Impersonation

It is our company policy that employees should not use AI to impersonate any person without their express permission. AI makes it possible to create content “in the style” of public figures; as a matter of policy, we do not do that at our company. Designated employees may, with permission and review, use AI to mimic the writing style of a current Delegate2Tech employee for the purposes of ghostwriting or editing content from that individual.

Training Employees on AI Usage

All employees involved in creating content with AI should receive appropriate training. This should cover both the technical aspects of using AI and the ethical considerations outlined in this policy.

Best Practices for Implementation

To practically implement this policy, always follow these steps:

  1. Understand the AI system you're using, including how it works and its potential limitations.
  2. Ensure that every new hire and existing employee you manage has read this policy.
  3. For each specific tool, create or use company materials that document its functionality, limitations, and our company standards for using the technology.
  4. Continually update your knowledge and training as AI technology evolves.

Acceptance

By using AI in your work, you agree to comply with this policy. Non-compliance will be taken seriously and could lead to disciplinary action or employment termination.

Remember, the goal of this policy is not to restrict creativity, but to ensure that we use AI responsibly and ethically. By following these guidelines, we can harness the power of AI while respecting our customers and upholding our company values.