Artificial intelligence is playing an increasingly prominent role in shaping how we work, communicate, and make decisions. From automating tasks to helping draft emails, AI is already embedded in daily routines for millions of people. One of the most well-known tools in this landscape is ChatGPT, a conversational AI developed by OpenAI that has seen explosive growth in both popularity and capability. However, as this technology becomes more widespread, an important question has emerged: Who is using these tools?
A 2025 report by Appfigures revealed that 75 percent of ChatGPT’s mobile users are men. This statistic is not just a curiosity—it’s a warning sign. The gender disparity in generative AI usage suggests that these tools may not be serving all demographics equally. It raises questions about whose voices are shaping AI’s evolution and who is being left out of that conversation.
At a glance, it may seem like just another tech demographic skew. After all, men have historically adopted new digital platforms more quickly than women. But generative AI is different. Unlike social media platforms or gaming consoles, tools like ChatGPT are not simply for entertainment or communication. They are dynamic, adaptive technologies that learn from their users in real time. If a narrow slice of society is primarily engaging with them, the consequences could be far-reaching and deeply ingrained.
Why Gender Disparity in AI Usage Matters
The core functionality of generative AI like ChatGPT depends on vast amounts of data. Some of this data comes from curated training sets scraped from the internet, but increasingly, a significant portion is shaped by user interactions. These include prompts, corrections, feedback, ratings, and ongoing behavioral signals. As a result, real-world use has a formative influence on how the model behaves over time.
If most of that use is coming from men, then men’s perspectives, language, interests, and even unconscious biases are becoming overrepresented in the dataset that the model adapts to. The result is a feedback loop: AI becomes more attuned to male-dominant use cases, which makes it more useful and appealing to men, which in turn perpetuates their higher engagement. Meanwhile, groups that are underrepresented may find the tool less intuitive, less relevant, or subtly biased, leading to lower use and continued exclusion.
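To see how quickly such a loop can compound, consider a deliberately simplified simulation. Every number and update rule below is invented for illustration; this is a sketch of the general dynamic, not a description of how ChatGPT is actually trained.

```python
# Toy simulation of a usage feedback loop. All parameters are invented for
# illustration; this is not a model of any real training pipeline.

def simulate(rounds=10, share_a=0.75, share_b=0.25, fit_a=0.5, fit_b=0.5):
    """share_*: fraction of prompts coming from each group.
    fit_*: how well the system serves each group, on a 0-1 scale."""
    for r in range(1, rounds + 1):
        # The system adapts in proportion to who is supplying the data.
        fit_a += 0.1 * share_a * (1 - fit_a)
        fit_b += 0.1 * share_b * (1 - fit_b)
        # Usage then drifts toward whichever group the system serves better.
        total = share_a * fit_a + share_b * fit_b
        share_a = share_a * fit_a / total
        share_b = 1.0 - share_a
        print(f"round {r:2d}: share_a={share_a:.2f}  fit_a={fit_a:.2f}  fit_b={fit_b:.2f}")

simulate()
```

Even with identical starting quality for both groups, the majority group's usage share and the quality gap grow round after round; the only asymmetry in the toy model is who supplies more data.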
The implications of this loop are not just theoretical. Many sectors—education, healthcare, business, marketing, creative industries—are beginning to incorporate AI into their daily workflows. If the AI serving these sectors is skewed by an imbalanced user base, the tools could reinforce inequities already present in society. For example, if ChatGPT becomes more optimized for solving problems typical in male-dominated professions like software engineering, finance, or data analytics, it may become less capable in professions where women are the majority, such as teaching, nursing, or social work.
Moreover, this male-dominant feedback loop can influence how leadership, success, communication, or intelligence are represented in AI-generated content. Traits often stereotypically associated with male leadership—assertiveness, competition, independence—may be emphasized more than equally valuable traits like collaboration, empathy, or emotional intelligence. The result is not just a lack of balance, but a subtly distorted view of what is “normal,” “effective,” or “professional.”
The Influence of Cultural and Social Factors
Understanding why 75 percent of ChatGPT users are men requires examining broader cultural and social contexts. Technology adoption has long reflected existing gender dynamics, with men generally being early adopters of new tools and platforms, especially those tied to computing or programming. Tech culture itself remains male-dominated, with far more men employed in software development, engineering, and data science. This overrepresentation carries into how technology is developed, marketed, and perceived.
When AI tools like ChatGPT are framed primarily as productivity boosters or technical assistants—roles often associated with stereotypically male interests—they may be more readily embraced by men. Women, on the other hand, may see fewer immediate personal or professional applications, particularly if the interfaces or use cases don’t feel inclusive or tailored to their needs.
Another important factor is trust and familiarity. Research has consistently shown that women are more cautious than men when it comes to adopting new technology, particularly when it involves sharing personal data or relying on algorithms for decision-making. If women feel unsure about how the model works, how their data is being used, or how trustworthy the responses are, they may be less inclined to experiment with it in the first place. Early negative experiences—such as finding the responses unhelpful, feeling alienated by tone, or detecting subtle bias—can quickly reinforce the perception that the tool is “not for them.”
Educational access and encouragement also play a role. In many educational systems, girls are still less likely to be encouraged to pursue computer science, AI, or STEM-related fields. This gap in exposure can carry over into adulthood, where women may feel less confident experimenting with AI tools, even when they could be useful in their work or studies. The intimidation factor is real, and it’s one of many reasons why inclusivity needs to be built into the design and outreach strategies of AI platforms.
AI Tools Are Becoming Gatekeepers of Information and Opportunity
Another reason the gender imbalance in AI usage is so troubling is that tools like ChatGPT are increasingly becoming gatekeepers of information. Whether it’s writing a resume, preparing for a job interview, planning a project, or learning a new concept, people are turning to generative AI for support. In a world where search engines and social media once dominated the discovery of knowledge, generative AI is becoming the new interface between users and the world’s information.
This shift carries enormous implications for equity. If only a select group is shaping how AI interprets and presents knowledge, then other groups risk being marginalized not just socially or culturally, but also informationally. If fewer women use tools like ChatGPT, the AI may become less skilled at handling topics that matter to them, less knowledgeable about the challenges they face, and less accurate in answering their questions.
In practical terms, this could mean that queries about maternity leave rights, domestic violence resources, women’s health, gendered workplace dynamics, or balancing caregiving with career aspirations might receive less thoughtful or accurate responses than queries about venture capital funding or Python scripting. Even worse, when a user from an underrepresented group turns to AI for help and finds the answer confusing, irrelevant, or dismissive, they may walk away from the technology altogether, missing out on opportunities that others are capitalizing on.
In this way, AI is not just a neutral tool. It’s a cultural artifact that both reflects and reproduces the power dynamics of its users. If we don’t address gender imbalances now, we risk building digital systems that reinforce inequality rather than correct it.
Inclusion Is Not Just a Moral Imperative—It’s a Functional One
It’s tempting to frame digital inclusion as a matter of fairness, and it is. But inclusion is also a practical and technical necessity. AI performs best when trained on diverse, representative data. This means the more perspectives and use cases it can learn from, the more effective it becomes for everyone. Homogeneity isn’t just unfair—it’s inefficient.
Imagine a doctor’s diagnostic tool that’s only been trained on data from white men. It would fail at identifying symptoms that present differently in women or people of color. The same logic applies to ChatGPT. If its user data is overly male, then its generalizations will become subtly (or overtly) less accurate for other groups. This isn’t just a hypothetical. In many machine learning systems, skewed data has already led to measurable disparities in everything from facial recognition to credit scoring.
The point is this: Inclusion isn’t a bonus feature—it’s foundational to building better AI. And that means developers, researchers, and companies like OpenAI need to prioritize representation not only in training data but also in real-world user engagement. If we want AI to serve everyone, we have to make sure everyone is participating in its development.
Moving Toward Equitable AI Use
So, how do we fix this? Addressing the gender gap in AI usage requires a multi-pronged approach involving awareness, design, education, and outreach.
First, there must be more research into the causes and consequences of user disparities. A single statistic—75 percent of mobile users are men—is a start, but we need to dig deeper. What kinds of prompts are men versus women entering? What industries are they using ChatGPT in? What frustrations or barriers are preventing more women from using it?
Second, product design must evolve to be more inclusive. This doesn’t mean “feminizing” the interface but ensuring that the AI is trained and evaluated on a diverse range of inputs, tone preferences, and use cases. It means investing in prompt testing that considers gendered language, sensitivity to different communication styles, and representation across domains that matter to women.
Third, education and outreach campaigns can go a long way in making AI tools feel more accessible. Schools, universities, and community organizations should provide workshops, demos, and resources that demystify AI and show women how these tools can empower them in their careers and daily lives. Representation matters—both in marketing and in mentorship.
Finally, feedback mechanisms need to be taken seriously. If users report that responses are biased, unhelpful, or exclusionary, that feedback must be acted on, not ignored. Building inclusive AI means building systems that are accountable to the full range of human experience.
The Real-World Consequences of Gender Skew in AI Usage
The gender imbalance in generative AI usage isn’t just an abstract problem—it has real, tangible consequences in how people access knowledge, navigate opportunities, and make decisions in their lives. When certain voices dominate digital spaces, the tools and systems shaped by those voices inevitably start to reflect their priorities, blind spots, and biases. And in the case of AI, where the system’s “intelligence” is constantly evolving through user input, the stakes are even higher.
Let’s take a step back and look at some key areas where gendered usage patterns can impact outcomes: education, career development, health, and digital safety. These domains are already affected by existing gender inequities. If AI tools like ChatGPT reflect only a limited subset of human experience—overwhelmingly skewed toward male voices—they risk reinforcing these imbalances in ways that are subtle but deeply consequential.
Educational Inequities and Gendered Knowledge Representation
AI has emerged as a powerful educational assistant. Students use ChatGPT to explain difficult concepts, brainstorm ideas, improve writing, and explore new topics. Teachers use it to create lesson plans, quizzes, and assignments. But who is benefiting the most from this assistance?
If men dominate usage, especially in subjects like mathematics, programming, and economics—fields where men are already overrepresented in many education systems—then ChatGPT will become more optimized in those areas. It will learn to respond more effectively to the kinds of questions men ask, in the tones and formats they prefer. This creates an optimization gap. The model becomes better at helping users who already have high digital literacy and comfort with self-directed learning.
Women, especially those in fields that are underrepresented in the model’s training or user input, may not get the same level of quality or accuracy in their responses. And if their questions are less frequent—perhaps because of confidence gaps, or because they find the model’s tone uninviting—then the system has less opportunity to learn how to serve them.
This creates a vicious cycle. The AI becomes better at helping users who already feel empowered, and worse at helping those who need it most. That’s not just a loss of utility—it’s a direct reinforcement of educational inequality.
Career Development and Professional Advancement
AI is also reshaping how people build and advance their careers. From resume writing to job searching, interview preparation to business planning, users are increasingly turning to ChatGPT as a virtual career coach. But again, gendered usage patterns have profound implications for whose professional experiences are represented—and whose are erased.
Imagine two users seeking promotion advice. One is a man working in a tech startup; the other is a woman navigating a leadership role in a nonprofit. If ChatGPT’s training and fine-tuning have been shaped mostly by male users in tech, it may offer advice that centers on performance metrics, negotiation tactics, and entrepreneurial boldness—strategies that align with traditionally masculine professional norms.
Meanwhile, the woman may be dealing with dynamics ChatGPT doesn’t understand as well—subtle workplace bias, emotional labor, balancing caregiving responsibilities, or navigating leadership styles that don’t fit the dominant mold. If her queries return generic, tone-deaf, or impractical advice, she’s less likely to use the tool again. And if she stops engaging, the model learns even less about people like her. Once again, a feedback loop of exclusion.
AI’s role in professional development isn’t just passive. It’s shaping how people understand success, how they prepare for opportunities, and what they believe they’re capable of. When those definitions are skewed by user demographics, we risk building a world in which only certain types of people thrive with AI’s help, while others are quietly left behind.
Gender Bias in Health and Wellness Support
Another major use case for AI tools like ChatGPT is personal wellness. People use it to ask about symptoms, mental health challenges, diet plans, exercise routines, and more. But gender is deeply embedded in health experiences, and failure to reflect that can have dangerous consequences.
For instance, medical symptoms often present differently in women than in men, particularly in areas like heart disease, autoimmune disorders, or hormonal imbalances. If AI systems are disproportionately shaped by male users asking about male bodies and male experiences, they may provide less accurate or useful information to women.
Even in mental health support—where many users turn to ChatGPT for coping strategies or emotional insight—gender matters. A woman dealing with postpartum depression, for example, might receive advice that fails to consider her context if the model hasn’t been trained on similar inputs. Worse, it might offer solutions that sound dismissive, condescending, or out of sync with her needs.
There is also a risk of normalizing dangerous or unhealthy behavior. If the model has absorbed lots of queries about extreme dieting, overtraining, or productivity hacks—topics more common among high-performance, often male-oriented user groups—it may start to treat those behaviors as standard. This can alienate users looking for more compassionate, balanced, or body-positive perspectives.
Ultimately, the absence of gender-sensitive input leads to an absence of gender-aware output. In the realm of health and wellness, that’s not just a service gap—it’s a safety risk.
Intersectionality: Gender Is Not the Only Axis of Exclusion
It’s important to recognize that gender disparity in AI usage doesn’t exist in isolation. It intersects with other forms of exclusion, including race, class, geography, disability, age, and language. While this article focuses on the gender gap, the broader issue is one of representational justice: who gets to shape the tools of the future, and who is ignored in the process.
For example, a white, educated, urban woman may have far fewer barriers to AI access than a rural woman of color who speaks English as a second language. The former may still be underrepresented, but she’s not excluded in the same ways. Conversely, men from historically marginalized communities may also find their experiences poorly reflected in AI interactions, despite being counted in the dominant gender statistic.
Understanding gender disparity in AI use requires a nuanced, intersectional lens. If we treat women as a monolithic group—or assume that increasing their usage numbers will fix all representational problems—we risk reproducing the same blind spots that caused the imbalance in the first place.
What’s needed is a deliberate effort to understand which women are missing from the data, why they’re not engaging, and what design or outreach interventions might make these tools more relevant to them. This kind of granular analysis is essential if we want generative AI to be not just broadly available, but genuinely equitable.
AI and the Risk of Reinforcing Harmful Norms
One of the subtler but most insidious consequences of gendered AI usage is the normalization of outdated, stereotypical, or harmful social norms. AI doesn’t “understand” the world—it reflects patterns in its data. If those patterns overrepresent male perspectives, and particularly dominant, Western, male perspectives, the model may start to replicate those norms as if they’re objective truths.
Consider the kinds of default examples ChatGPT might offer when asked about leadership, relationships, parenting, or professional success. If those examples skew toward traditional nuclear families, corporate hierarchies, or binary gender roles, users who don’t fit those molds may find the answers jarring or irrelevant.
Worse, the model might subtly frame certain behaviors as “normal” and others as “abnormal,” simply because it has encountered one more frequently than the other. A woman asking for advice on navigating a polyamorous relationship, or balancing ambition with motherhood, may find responses that feel judgmental, confusing, or tone-deaf—not because the AI has an opinion, but because it lacks representative data to draw from.
This issue also extends to humor, cultural references, and tone. If most users are men from specific cultural contexts, the model may adopt ways of speaking that feel alienating or even offensive to others. And because AI often hides its sources and doesn’t show its “work,” users may not understand why an answer feels off—they’ll just stop trusting the system.
When that happens, exclusion doesn’t just occur at the access level—it happens at the cultural level. The AI becomes a mirror that only reflects certain people to themselves, while others remain unseen.
The Myth of the Neutral Machine
One of the most persistent and dangerous myths surrounding AI is that it’s neutral, free of bias, politics, or identity. This belief leads many to assume that if the technology is producing unequal outcomes, the problem must lie with the users. But that’s a deeply flawed assumption.
All AI systems are designed by humans. The choices about what data to train on, what behaviors to reward, what language to normalize, and what metrics to optimize are all profoundly human decisions. And when those systems go out into the world, they continue to evolve based on human interactions—interactions that reflect all the inequalities, prejudices, and blind spots of the real world.
So when we see that 75 percent of ChatGPT’s mobile users are men, we’re not looking at a coincidence. We’re looking at a consequence of decades of gender imbalance in tech, education, design, and access. And if we don’t intervene consciously and systematically, that imbalance will not just persist in AI—it will be amplified by it.
Designing AI for Inclusion: Moving Beyond Representation
Fixing the gender disparity in AI usage is not just a matter of representation—it requires systemic design change. We cannot assume that more women using ChatGPT will automatically make the system more inclusive. Inclusion has to be built into the architecture of the technology, the policies of the companies that develop it, and the social ecosystems that surround it.
To do this effectively, we must begin by asking: What would an inclusive AI system look like? It wouldn’t just reflect the average user—it would actively account for the needs, values, and experiences of those who are not the average user. It would be responsive to variation and flexible in style. It would understand that not all users communicate the same way, and that intelligence is expressed in many forms: through directness and nuance, facts and emotions, logic and empathy.
This shift requires challenging the default settings that define “neutrality” in AI. Too often, neutrality in technology design has meant “fitting the norms of the dominant group.” But true neutrality, or fairness, means recognizing and accommodating difference. That includes different ways of asking questions, different priorities in answers, and different contexts in which AI is used.
The Role of Tech Companies: Responsibility, Not Just Innovation
Companies like OpenAI, Google, Anthropic, and Meta play a crucial role in shaping how AI tools are used and by whom. With this power comes responsibility. The idea that tech companies are just “building tools” and not accountable for how they’re used is both outdated and irresponsible.
AI developers must take an active role in ensuring that their platforms are equitable, not just in terms of access, but in outcomes. That means:
- Auditing user data regularly for demographic imbalance.
- Testing outputs across a wide range of social, cultural, and linguistic contexts.
- Consulting with diverse groups throughout the design process, not just at the end.
- Hiring interdisciplinary teams that include ethicists, sociologists, educators, and community advocates alongside engineers and product managers.
And most importantly, they must be transparent. If gender disparities are observed in usage, those should be disclosed, not hidden. If performance is weaker in certain areas (e.g., maternal health, caregiving, gender-based violence), that information should be made public, along with remediation plans. Otherwise, users are left with a false sense of security, assuming the AI is equally competent across all domains.
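To make the first item in that list concrete, here is a hypothetical sketch of what a basic demographic usage audit could look like. The CSV file, its column names (self_reported_gender, topic_domain, feedback_score), and even the assumption that such fields are collected are placeholders for illustration only; real logs, consent, and privacy handling would look very different.

```python
# Hypothetical audit sketch: break prompt volume and feedback scores down by
# self-reported demographic group and topic domain.
from collections import defaultdict
import csv

def audit(path="usage_log.csv"):
    counts = defaultdict(int)       # (group, domain) -> number of prompts
    ratings = defaultdict(list)     # (group, domain) -> feedback scores
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["self_reported_gender"], row["topic_domain"])
            counts[key] += 1
            if row["feedback_score"]:
                ratings[key].append(float(row["feedback_score"]))
    for key, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        scores = ratings[key]
        avg = sum(scores) / len(scores) if scores else float("nan")
        print(f"{key[0]:>12} | {key[1]:<20} | prompts={n:6d} | avg_rating={avg:.2f}")
```

Even a report this crude would surface both volume gaps (who is asking about what) and quality gaps (who is rating responses poorly), which is the kind of evidence a public remediation plan could actually be tied to.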
Rethinking Metrics: What Counts as “Success” in AI?
Part of the reason gender disparities persist in AI systems is that the metrics for “success” are poorly defined. Engagement rates, retention, prompt volume, and user growth are all useful indicators—but they can also obscure deeper inequalities.
For example, if a feature is widely used by men but rarely by women, should it be considered successful? If an AI assistant performs well in technical writing but poorly in empathetic communication, is it a “high-quality” product? If prompt templates are based on male-coded language patterns (e.g., “dominate,” “hack,” “scale”), how will that affect users who speak differently?
To build inclusive AI, developers must look beyond raw numbers and start asking qualitative questions:
- Who is using the tool the most—and why?
- Who isn’t using it—and what are the barriers?
- What kinds of value are different users extracting from the tool?
- Where do people stop using the tool, and what frustrates or alienates them?
These are difficult questions. They don’t have simple answers. But they are essential if we want to create systems that serve the many, not just the few.
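One way to approach the last of those questions with data rather than anecdote is to disaggregate retention by group. The sketch below is hypothetical: the session data, group labels, and weekly granularity are invented for illustration, not drawn from any real usage telemetry.

```python
# Hypothetical retention breakdown: one way to ask "who stops using the tool?"
# instead of tracking aggregate growth alone.
from collections import defaultdict

# user_id -> (group label, set of weeks in which the user was active)
sessions = {
    "u1": ("group_a", {1, 2, 3, 4}),
    "u2": ("group_a", {1, 2, 4}),
    "u3": ("group_b", {1}),
    "u4": ("group_b", {1, 2}),
}

def weekly_retention(sessions, weeks=4):
    active = defaultdict(lambda: defaultdict(int))   # group -> week -> active users
    cohort = defaultdict(int)                        # group -> cohort size
    for group, active_weeks in sessions.values():
        cohort[group] += 1
        for week in active_weeks:
            active[group][week] += 1
    for group in sorted(cohort):
        rates = [active[group][w] / cohort[group] for w in range(1, weeks + 1)]
        print(group, [f"{r:.0%}" for r in rates])

weekly_retention(sessions)
```

Aggregate numbers can look healthy while one group's retention curve quietly collapses; disaggregating is what exposes the gap.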
Community-Driven AI: Participatory Design as a Solution
One of the most promising models for addressing inclusion in AI is participatory design. Instead of building technology in isolation and then trying to patch over its inequities, participatory design brings affected communities into the design process from the beginning.
This means inviting women—especially women from marginalized groups—to help shape how AI works. It means asking them what kinds of interactions feel helpful, what kinds of content they want more of, what frustrates them about current tools, and how AI could better support their goals.
This isn’t just a feel-good gesture. Participatory design has a strong track record in everything from public health to urban planning. When people have a voice in how technology is built, the end product tends to be more effective, more trusted, and more widely adopted.
AI companies should invest in co-design labs, community advisory boards, and inclusive product testing. They should fund research led by women, especially in the Global South. They should treat marginalized users not as edge cases, but as core collaborators.
Digital Literacy Is Gendered: Closing the Confidence Gap
Another overlooked factor in AI usage disparity is digital literacy—not just in terms of access or skill, but confidence. Studies have shown that even when women and men have similar technical abilities, women often underestimate their proficiency while men overestimate theirs. This “confidence gap” has huge implications for AI usage.
If a woman encounters ChatGPT and assumes it’s too complex, too technical, or “not for her,” she may abandon it quickly, even though it might have been useful. Men, on the other hand, are more likely to tinker, explore, and keep trying until they get results.
Digital literacy initiatives must address this confidence gap head-on. That means offering not just tutorials, but encouragement. Not just tools, but mentorship. It means teaching AI as a conversation, not a command line. It also means making space for questions like:
- “What happens if I get it wrong?”
- “Is it okay to ask personal or emotional questions?”
- “What does a good prompt look like, and how do I learn from the results?”
Building user confidence is not a “soft” issue. It’s a foundational one. Without it, many users—especially women—will self-select out of systems that are supposed to be inclusive by design.
The Risk of Normalizing Inequity Through Silence
Perhaps the most dangerous outcome of the current gender disparity is normalization. If men continue to dominate AI usage, and that fact goes unchallenged, it may start to seem inevitable. Or worse, it may be taken as a sign that AI is simply “more appealing” to men, or that women “just aren’t interested.”
This is how structural inequality survives: through silence, through rationalization, through lack of accountability. If we let skewed usage patterns go unexamined, they will solidify into assumptions. Those assumptions will influence how products are built, how success is measured, and how future tools are imagined.
We have seen this before in the early days of computing, when men were seen as the default users of technology. It took decades of advocacy, policy change, and cultural pressure to even begin correcting that imbalance. With AI, we cannot afford to wait that long.
A Framework for Gender-Inclusive AI Engagement
To move beyond critique and into change, we need a strategic framework—one that spans product development, policy, and community empowerment. A gender-inclusive AI ecosystem doesn’t emerge from isolated efforts or token gestures. It requires systemic coordination across five interconnected domains:
- Design: Build AI interfaces that are responsive to diverse communication styles, tones, and interaction modes—not just command-based inputs, but conversation, ambiguity, and emotional nuance.
- Access: Ensure that cost, language, internet infrastructure, and platform availability do not limit who can meaningfully use AI tools.
- Education: Expand AI literacy curricula that explicitly address gender barriers, confidence gaps, and underrepresentation, emphasizing not just technical skill-building but reflective practice.
- Feedback: Create accessible channels for marginalized users to share what works, what harms, and what’s missing. Then actually act on that feedback, and show the work.
- Accountability: Make inclusion a measurable goal. Tie equity outcomes to internal metrics, public reporting, and stakeholder reviews.
If these five domains are treated not as afterthoughts but as core principles, we stand a chance of reshaping the AI landscape—not only for women, but for everyone marginalized by current defaults.
Global South, Local Voices: Decentering the West
Much of the data that shapes today’s large language models—and much of the conversation about AI inclusion—comes from Western, English-speaking contexts. But globally, the consequences of gendered AI access are even more acute.
In many regions, women face cultural, infrastructural, or legal barriers to technology use. Smartphones may be shared or controlled by family members. Language options may be limited. Literacy levels, both digital and textual, may differ significantly from those assumed by current models. And yet these voices matter profoundly, because they hold lived experience that’s absent from most training data.
If AI is going to be part of education, health, business, and governance around the world, it must be built with—and for—those who’ve historically been left out of technology’s design loop.
That means:
- Funding regional AI research labs run by women.
- Training models on multilingual, culturally diverse data.
- Building local partnerships with women-led organizations.
- Addressing colonial dynamics in data extraction and model deployment.
We don’t just need to “include” the Global South. We need to center it. Because true equity isn’t about giving others access to a Western-built system—it’s about letting them build systems of their own.
Generative AI and the Politics of Default
The deeper challenge in making generative AI more gender-inclusive is that we are, in essence, fighting the power of defaults. The default user. The default language. The default tone. The default values. These defaults are rarely named, but they shape every prompt, every interface, every dataset.
To challenge them, we need to develop what Ruha Benjamin calls a “critical imagination”—a way of seeing not just what technology is, but what it could be. That means imagining interfaces that invite vulnerability rather than mask it. Models that don’t flatten culture, emotion, or identity into sanitized outputs. AI systems that speak from a plurality of truths, not just the dominant narrative.
This work is political. It requires naming bias, confronting inequality, and choosing sides. But it is also creative. It calls us to design not just better tools, but better social contracts. It invites us to ask: What kind of intelligence do we want to build? And for whom?
Who Gets to Shape the Future?
We often talk about AI in the future tense. But the truth is, the most important decisions about AI are being made right now—by developers, researchers, investors, regulators, and early adopters. And that makes this moment both a crisis and an opportunity.
The fact that men make up the majority of generative AI users is not just a reflection of the past. It’s an early signal of who is shaping the cultural norms, the moral questions, the linguistic patterns, and the technical capabilities that future generations will inherit.
If women, nonbinary people, LGBTQ+ communities, and other historically marginalized groups are not part of that shaping process, the resulting systems will not serve them. Worse, they may actively harm them by perpetuating stereotypes, ignoring needs, or consolidating power in ever-narrower circles.
Rebalancing this power dynamic is not just about fairness—it’s about survival. In a world where AI tools influence everything from hiring to healthcare to storytelling, whose perspective gets embedded in the algorithm becomes a matter of justice.
Beyond Inclusion: Toward Co-Creation
Inclusion is important—but it’s not the endpoint. Inclusion still centers the dominant group, asking others to enter their space. What we need is co-creation: systems, norms, and institutions that are built together, from the ground up.
This means:
- Co-authoring datasets with community consent.
- Co-building prompts, interfaces, and use cases with a range of users.
- Co-governing AI through participatory policymaking, not just top-down regulation.
- Co-owning AI systems via public infrastructure, open models, or democratic cooperatives.
In short, AI must not be something that’s built for people. It must be something that’s built with them.
Final Thoughts
The fact that 75 percent of ChatGPT’s mobile users are men is not an isolated statistic. It’s a reflection of long-standing dynamics in tech: who feels invited, who is centered, who gets to experiment, and who is quietly excluded.
But this isn’t just about AI. It’s about power. It’s about whose questions get answered, whose problems get prioritized, and whose way of thinking gets reinforced by the systems we’re now embedding into everyday life.
Generative AI will shape how students learn, how workers solve problems, how artists create, and how societies imagine themselves. If this future is disproportionately authored by one demographic, then it risks replicating a long history of imbalance—where tools are built for others, not with them.
But the good news is: nothing about this future is inevitable.
Usage can be shifted. Interfaces can be redesigned. Confidence can be cultivated. Policy can be shaped. Culture can be moved. The gender gap in AI is not a fixed outcome—it’s a choice, made every day, by every prompt, every product decision, every learning module, and every line of code.
So the question is not just “Why are more men using ChatGPT?”
The question is: “What kind of world are we building if they’re the only ones shaping it?”
And the follow-up is even more important:
“What would it take to change that?”
Because the answer is not just technical. It’s cultural. It’s political. It’s personal.
And it starts now.