We prompt generative AI for guidance on streamlining and improving code, understanding that its output is only as useful as the prompts we provide and the oversight we apply to it.
We use boilerplate code when we experiment with AI to guard against data leakage of any kind. We protect client information by never pasting proprietary code or data into generative AI tools such as Copilot, ChatGPT, or Bard.
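As an illustration of this practice, the sketch below shows one way a team might screen a snippet before sharing it with a generative AI tool. The patterns and the `safe_to_share` helper are hypothetical, not part of our actual tooling; a real process would maintain an approved, regularly reviewed pattern list.

```python
import re

# Hypothetical patterns for illustration only; a real policy would
# maintain and review an approved list of sensitive-data patterns.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # hard-coded API keys
    re.compile(r"(?i)password\s*[:=]\s*\S+"),      # embedded passwords
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
]

def safe_to_share(snippet: str) -> bool:
    """Return True only if no sensitive pattern appears in the snippet."""
    return not any(p.search(snippet) for p in SENSITIVE_PATTERNS)

# Generic boilerplate with no client data passes the check,
# while a snippet containing an embedded credential does not.
print(safe_to_share("def add(a, b):\n    return a + b"))
print(safe_to_share("API_KEY = 'sk-12345'"))
```

A check like this is a safety net, not a substitute for judgment: the policy of never pasting proprietary code or data still applies even when an automated scan finds nothing.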
We have established processes for auditing code for vulnerabilities and performing a QA review to prevent code that is insecure, noncompliant, or inaccessible from being published.
We are proponents of Responsible AI (RAI) and seek to work only with agency partners who use copy and imagery free of the copyright issues that generative AI can introduce.
We also support Constitutional AI initiatives to prevent the proliferation of biased, discriminatory, and anti-democratic information.
As new insights about AI’s impact come to light and new solutions emerge, we will continue to revisit our AI Code of Conduct to ensure our guardrails uphold the highest standards of code and data security and of human and societal well-being.