
Who’s Watching AI?

It seems like every time we turn around, another AI tool has been created and marketed. AI tools can conduct research and deliver information within seconds, and as responsible stewards of the integrity of government agencies, we must ask ourselves a critical question: Is Big Brother watching AI, and if so, how can we ensure that the watchful eye is inclusive rather than exclusive?

Artificial Intelligence has become an indispensable tool in the realm of government policy and information sharing. It has the power to revolutionize the way we gather, process, and disseminate information. But with great power comes great responsibility, especially when it comes to ensuring that the AI-generated content we share is inclusive and considerate of neurodiverse cognitive processing styles and personal backgrounds.

Challenge: Is AI Inclusive?

How are AI sources vetted for inclusivity? AI systems are only as good as the data they’re trained on, and if that data is biased or exclusive, the results will be too. This makes the vetting process crucial. For government employees who are responsible for policy, it is critical to ensure that the sources feeding AI are diverse and well-rounded, reflecting the rich tapestry of human experience.

Some effective strategies include:

  • Database Diversity: Prioritize the use of databases that are known for their inclusivity and diversity.
  • Regular Audits: Implement regular audits of AI tools to ensure they are drawing from a wide range of perspectives and sources.
  • Cross-Departmental Collaboration: Encourage collaboration between departments to share diverse sources and promote an inclusive approach.

Ultimately, It Falls Upon You

AI can generate content all day long, and you can easily turn around and share it, but I caution you against sharing that information exactly as it was generated. It still takes humans to analyze the results for their intended purpose and to ensure inclusivity. It's not enough to draw on known diverse sources; you must also pay close attention to how the information is presented so that it can be processed accurately by neurodivergent readers.

Here is one very simple example for illustration: hashtags (#) are used in social media posts, and users often choose to follow certain hashtags. One common technology hashtag is #riskmanagement. Notice that this hashtag appears as two words smashed together into a single term. To those with dyslexia or neurodivergent visual processing challenges, such hashtags can appear jumbled or unreadable, and screen readers also have trouble with them. Starting each word with a capital letter makes the hashtag easier to read and understand: #RiskManagement. When reviewing and ultimately presenting your AI-generated content, consider the following:

  • Multimodal Presentation: Use a variety of formats, such as text, audio, and video, to cater to different learning preferences.
  • Simplified Language: Avoid jargon and use clear, concise language to make information accessible to all.
  • Feedback Mechanisms: Establish channels for feedback on the inclusivity of information, allowing continuous improvement.
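To make the hashtag guidance above concrete, here is a minimal sketch of how a content workflow might build capitalized (often called CamelCase or PascalCase) hashtags from individual words; the `accessible_hashtag` helper name is my own illustration, not a standard function:

```python
def accessible_hashtag(words):
    """Join words into a hashtag with each word capitalized.

    Capitalizing each word (e.g. #RiskManagement instead of
    #riskmanagement) makes the tag easier for dyslexic readers to
    parse and helps screen readers pronounce it word by word.
    """
    return "#" + "".join(word.strip().capitalize() for word in words)


print(accessible_hashtag(["risk", "management"]))  # → #RiskManagement
```

The key design point is that the words are kept separate until the final join, so each one can be capitalized individually rather than trying to split an already-merged lowercase tag.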

Government Resources on AI Ethics and Inclusivity

The U.S. government has recognized the importance of AI ethics and inclusivity, providing resources to guide policy employees. These include guidelines on ethical AI use, training programs on bias mitigation, and tools for inclusive communication.

  1. The National AI Initiative provides frameworks for ethical AI research and deployment.
  2. The National Institute of Standards and Technology (NIST) publishes the AI Risk Management Framework.
  3. The Office of Science and Technology Policy (OSTP) offers resources on AI ethics and inclusivity.
  4. GSA’s Technology Transformation Services provides training on inclusivity in digital services.

Here’s an interesting article about inclusivity considerations within the creation of new AI tools and platforms: https://ssir.org/articles/entry/inclusive-generative-artificial-intelligence

It’s Up to All of Us — Here’s What We Can Do

Ensuring AI inclusivity is not just a task — it’s a mission. New resources appear almost daily to guide our creation and use of AI tools. Here are some ways to stay informed and hear from the pioneers in this arena:

  1. Explore the available U.S. government resources.
  2. Share these resources within your agency to promote inclusive workplaces for neurodivergent individuals.
  3. Review all AI-generated content carefully to ensure it meets inclusivity standards.

It truly is up to all of us to ensure that AI serves all people: inclusivity must be built into AI tools, and AI content must be closely reviewed for it. The more vigilant our attention to this aspect of AI, the more equitable and comprehensible our digital world becomes for everyone.


The multi-faceted nature of Susan Powell’s professional background paints the picture of a lifetime learner who has always taken full control of her career path and decisions to apply her learning experiences in the most productive ways possible. Susan has brought her passion for writing and communications to every career upgrade and role, which helped her to secure the Marketing Director position for a cybersecurity company that she holds today. Fueled by her continued enthusiasm for earning applicable certifications, she continues to develop her marketing prowess and channel partner marketing skills. This former elementary teacher-turned-marketer is still a happy work-in-progress.

AI-generated image by Susan Powell
