AI as a Growing Child: How We Can Shape Its Future Responsibly
Article by Ron Guerrier
Note from SustainableIT.org: Ron Guerrier will be attending The World Economic Forum in January as part of our organization’s IT delegation. He also leads SustainableIT’s Responsible and Sustainable AI Working Group, which focuses on governance of data integrity, and he will join the nonprofit’s global board in 2025.
Artificial intelligence (AI) is no longer the stuff of science fiction; it’s here, influencing everything from healthcare to hiring practices. Tools like ChatGPT have democratized access to AI, allowing individuals and organizations to harness its potential in ways previously unimaginable. But as with any transformative technology, AI comes with risks—chief among them, the perpetuation of biases and systemic inequities. To guide AI’s development responsibly, we need to think of it not just as a tool, but as a growing child—shaped by its environment, for better or worse.
This analogy might seem odd, but it’s instructive. AI, like a child, learns from those around it. Developers, engineers, policymakers, and even end-users all contribute to its growth. To ensure AI evolves into a force for good, rather than a perpetuator of harm, we must address the societal and systemic factors that shape it. One framework that helps us understand this dynamic is Dr. Urie Bronfenbrenner’s Ecological Systems Theory, originally designed to examine human development. Applied to AI, it reveals the interconnected layers of influence that guide its growth and underscores the urgent need for responsible governance.
At the most immediate level is the microsystem—the developers, engineers, and users directly interacting with AI. These are the people who write algorithms, choose training data, and determine how AI systems operate. The problem is that these systems often reflect the biases of their creators. For example, when I asked an AI tool to enhance a photo of myself—a 50-year-old Haitian American Black man—it rendered an image of a younger white male with blue eyes. This wasn’t a fluke; it was the result of biased data and decision-making embedded in the system. Without diverse perspectives among developers, AI will continue to misrepresent and exclude marginalized communities.
Next is the mesosystem, which represents the relationships between key actors—tech companies, governments, and researchers. These groups determine how AI is deployed and regulated. If these relationships prioritize profit over fairness or innovation over inclusion, entire communities risk being excluded from the benefits of AI. Marginalized groups, particularly Black and Latino communities, already face systemic barriers to digital access. When AI is designed and governed without their input, those barriers become even harder to overcome.
The exosystem includes external forces like corporate policies, media narratives, and economic pressures. These forces often dictate industry priorities, and lately, the trend has been troubling. Many companies are scaling back Diversity, Equity, and Inclusion (DEI) initiatives, which play a crucial role in ensuring AI systems are designed with fairness in mind. Without these programs, there’s little accountability for whether AI serves everyone equitably—or just the privileged few.
The macrosystem reflects the broader cultural context—our collective values, norms, and beliefs. In the tech world, there’s often a disconnect between the pace of innovation and the values of equity and inclusion. For example, while AI development is accelerating, diversity in STEM fields remains stagnant. Black students earn just 8.6% of computer science degrees, and Black professionals are even less represented in leadership roles (Deville, 2024). This lack of representation is more than a diversity issue; it’s a systemic failure that risks embedding biases into the very foundation of our technologies.
Finally, the chronosystem captures the influence of time—how historical events and technological milestones shape AI’s trajectory. The release of ChatGPT in 2022 marked a pivotal moment, making generative AI tools widely available to the public. While this democratization is exciting, it also comes with risks. Without ethical guardrails, these systems can amplify existing inequalities. Decisions made today—whether to regulate AI or let it develop unchecked—will have long-term consequences for society.
So, what can we do? First, we must recognize that AI doesn’t develop in a vacuum. It’s shaped by people, policies, and cultural norms. To ensure it grows responsibly, we need diverse voices at the table—developers, policymakers, and community leaders who can represent the needs of all users, not just the privileged few. Second, we need stronger governance frameworks that emphasize transparency, fairness, and accountability. This includes mandating bias testing, diversifying datasets, and holding companies accountable for the societal impacts of their technologies.
Finally, we need a cultural shift. AI should be seen not just as a technological achievement, but as a societal one. Its development must prioritize equity, inclusion, and responsibility. By doing so, we can harness AI’s potential to bridge gaps and create opportunities, rather than perpetuate harm.
The analogy of AI as a growing child reminds us of the stakes. Like raising a child, shaping AI’s future requires care, collaboration, and foresight. With the right guidance, AI can be a force for good—an innovation that uplifts rather than excludes. But if we neglect our responsibility, we risk creating a technology that mirrors and magnifies society’s worst flaws. The choice is ours to make.
The Author
Ron Guerrier is a distinguished technology leader with over 25 years of experience driving innovation and transformation across the public and private sectors. He currently serves as the Chief Technology Officer for Save the Children, where he leverages technology to enhance global programs benefiting underserved communities. Previously, Guerrier held notable positions as a Fortune 500 CIO at HP Inc., Farmers Insurance, and Toyota Financial Services, as well as Illinois Secretary of Innovation and Technology, where he led groundbreaking digital initiatives at the state level. In 2022, he was inducted into the CIO Hall of Fame in recognition of his visionary leadership in the technology field. Guerrier is currently pursuing a doctoral degree at the University of Southern California, focusing on digital equity in the age of artificial intelligence, further demonstrating his commitment to fostering technology that serves all communities equitably.
References
Bronfenbrenner, U. (1979). The ecology of human development. Harvard University Press.
Deville, K. (2024). STEM education statistics in 2024. STEM Education Guide. https://stemeducationguide.com/stem-education-statistics/