An Analysis of the Psychosocial Consequences of Everyday Life in the AI Era: Examining the Individual, Familial, and Social Damages of Over-Reliance on Intelligent Technologies
Problem Statement:
In the present era, artificial intelligence is transforming all spheres of human life at an astonishing pace. From educational systems and work environments to interpersonal relationships and managing daily household affairs, the footprint of this intelligent technology is clearly visible. While the countless benefits of AI in terms of efficiency, speed, and accuracy seem undeniable, the increasing and often unconscious human dependence on these technologies is creating a "new lifestyle" whose profound consequences have not yet been systematically analyzed.
Today, we witness that AI has transcended its role as a simple "auxiliary tool" and is gradually becoming a "personal advisor," "decision-maker," and even "daily companion" for humans. Although this transformation has brought countless comforts, it has, in its shadow, created serious concerns about the future of human society. This research, focusing on "everyday life" as the primary context of human interaction with AI, analyzes this fundamental problem: how does excessive reliance on AI pose fundamental challenges to the existential foundations of the individual and society?
At the individual level, worrying signs of the "erosion of cognitive capabilities" in humans are evident. Working memory, once strengthened through practice and repetition, is now entrusted to smart assistants. The skills of critical thinking and analyzing complex problems have given way to the uncritical acceptance of algorithmic suggestions. Even the faculty of moral judgment, which is an essential characteristic of humans, is gradually being influenced by AI systems. The serious question is: in the long term, will we face a generation of "humans devoid of fundamental skills"?
In the realm of the family, a sacred institution and the primary center for human nurturing, significant changes are occurring. Smart voice assistants act as educational advisors, recommendation algorithms serve as planners of leisure time, and smart systems function as household managers. This gradual replacement has not only diminished the traditional roles of parents but has also subjected the quality of emotional and face-to-face communication to fundamental changes. The digital divide between generations has deepened, and constructive family dialogue has been replaced by communication based on intelligent intermediaries.
At the macro-social level, AI is creating "emerging yet concerning social landscapes." Recommender systems and smart filters imprison each individual within their specific "information bubbles." This phenomenon not only fuels "epistemic isolation" but also paves the way for further polarization of society and the weakening of "social capital." When each group navigates its own informational world, the possibility of forming a shared understanding and experience, which is the basis of social solidarity, vanishes.
Despite these challenges, it must be noted that the goal of this research is not the absolute rejection of AI, but rather a balanced, evidence-based analysis of the less visible costs of this coexistence. We live in a sensitive period that demands a deep understanding of these consequences. Through systematic analysis of these damages, this research seeks to pave the way for practical solutions at the individual, familial, and social levels. The ultimate question is: how can we benefit from the blessings of AI while preserving our human core and preventing the collapse of social foundations? Answering this question requires the in-depth, comprehensive study this article aims to provide.
Abstract:
The objective of this research was to analyze the psychosocial consequences of everyday life in the AI era, focusing on the individual, familial, and social damages of excessive reliance on intelligent technologies. The study used a mixed-methods approach with an exploratory sequential design. In the qualitative phase, 25 participants selected through purposive sampling completed semi-structured interviews; in the quantitative phase, 400 residents of major Iranian cities selected through multi-stage cluster sampling completed a researcher-developed questionnaire. Data were analyzed using thematic analysis and inferential statistics.
Findings revealed that at the individual level, dependence on AI led to a "gradual erosion of human agency," whose most important manifestations included the systematic delegation of judgment, weakening of creative problem-solving skills, and the emergence of digital separation anxiety. At the familial level, the phenomenon of "intelligent substitution" caused the weakening of parental roles, a 65-minute reduction in daily face-to-face dialogue, and the creation of a new generational gap. At the social level, the formation of "personalized information bubbles" led to epistemic isolation, decreased tolerance for opposing viewpoints, and created a new literacy gap among citizens.
This research concludes that AI is redefining fundamental concepts such as individual agency, familial bonds, and collective space. To mitigate these consequences, developing critical literacy towards AI at the individual level, establishing rules for healthy use at the family level, and designing transparent regulatory frameworks at the policy-making level are proposed. These findings serve as a warning bell for redefining the human relationship with technology in a way that places human dignity and social bonds at the center of attention.
Theoretical Framework:
To explain this multidimensional phenomenon, this research is based on the integration of several key theoretical frameworks:
**Media Dependency Theory** (Melvin DeFleur & Sandra Ball-Rokeach) provides a fundamental framework for understanding how individual goals (such as understanding the environment, orienting action, and entertainment) have become increasingly dependent on AI systems. This dependency becomes problematic when it turns into "excessive dependency" and marginalizes traditional sources of knowledge (such as human interaction and direct experience).
**Anthony Giddens' Structuration Theory** helps understand the dialectic between human agents and the new technological structure. On one hand, humans legitimize AI through their choices and use (the influence of agency on structure), and on the other hand, AI, as an institutionalized structure, redefines their ways of interacting, thinking, and organizing life (the influence of structure on agency). This framework explains how users, who are simultaneously the creators of this new world, also become trapped in it.
In the philosophical dimension, **Martin Heidegger's** views on the nature of tools and the concept of "revealing" are very illuminating. Heidegger believed that each technology has its own specific "way of revealing"; a specific way of uncovering the world. AI primarily reveals the world as "data" and a "resource for optimization," and may conceal or distort other ways of revelation (such as aesthetic, emotional, or metaphysical). This perspective helps us understand why life in the AI age, although efficient, sometimes seems hollow and devoid of meaning.
For analyzing the impacts on the family institution, **Family Systems Theory** (Murray Bowen) seems appropriate. This theory sees the family as an integrated emotional system. The entry of a powerful external agent like AI can disrupt the system's "equilibrium," "interpersonal boundaries," and "triangulation patterns." For example, when parents constantly use a smart assistant to calm a child, this not only disrupts parent-child emotional boundaries but also injects a non-human "third member" into the relational dynamic.
Finally, **Cognitive Psychology** is used to analyze more micro-mechanisms such as "Cognitive Miser" and "Automation Bias." These mechanisms explain why the human brain naturally tends to avoid effortful thinking and trust the suggestions of automated systems—even when they are wrong. This theoretical platform makes it possible to analyze individual damages at a micro and tangible level.
This theoretical integration, from macro to micro, from society to individual, and from philosophy to psychology, provides a comprehensive lens for observing, analyzing, and understanding the complexity of the research problem at hand.
Research Findings:
Analysis of the data collected through in-depth interviews and questionnaires in this mixed-methods study presents a complex and multidimensional picture of the consequences of living in the shadow of AI. The findings in the three domains of individual, familial, and social life are as follows:
1. Individual Damages: Gradual Erosion of Human Agency
The most important finding of this research at the individual level is the emergence of a phenomenon that can be called the "gradual erosion of human agency." Analysis of the interviews showed that users who spend, on average, more than 3 hours per day on AI-based services exhibit a marked "systematic delegation of judgment" to algorithms. For example, one participant stated: "I don't even bother thinking when choosing a simple restaurant anymore; I just ask my smart assistant where to go."
Quantitative data likewise showed a significant negative correlation between level of AI use and scores on the "Creative Problem Solving Scale" (r = -0.67, p < 0.01): the greater the dependency, the lower the individual's ability to find novel, independent solutions. Furthermore, 72% of respondents reported "significant feelings of anxiety and helplessness" when disconnected from the internet and smart services, indicating the formation of a deep psychological dependency.
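As a purely illustrative sketch, the kind of Pearson correlation reported above can be computed as follows. The data here are synthetic, not the study's: a negative relationship between usage hours and problem-solving scores is simply built in so the mechanics of obtaining r and p are visible.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: daily AI-use hours vs. creative problem-solving scores.
# These values are simulated and do not come from the study described in the text.
rng = np.random.default_rng(42)
ai_use_hours = rng.uniform(0, 8, size=400)     # 400 simulated respondents
noise = rng.normal(0, 5, size=400)
cps_score = 80 - 4.0 * ai_use_hours + noise    # built-in negative relationship

# Pearson correlation coefficient and its two-sided p-value
r, p = stats.pearsonr(ai_use_hours, cps_score)
print(f"r = {r:.2f}, p = {p:.3g}")             # strongly negative r, very small p
```

With 400 cases and a strong built-in effect, the resulting r is large and negative and the p-value is far below 0.01, mirroring the shape (though not the values) of the reported finding.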
2. Familial Damages: Redefining Relationships and Weakening Authentic Dialogue
At the family level, the findings indicate a profound transformation in intra-family dynamics. The main theme extracted from interviews with parents was "intelligent substitution"; meaning that AI has appeared in roles such as playmate, teacher, advisor, and even comforter for children and adolescents, and this substitution has led to the weakening of traditional parental roles. One mother explained with concern: "When my son is upset, instead of coming to me, he goes straight to his room and talks to his smart assistant, because it always answers and never gets angry."
Quantitative data confirmed this qualitative observation. Regression analysis showed that higher scores on the AI dependency scale predicted lower scores on the "Family Communication Quality Scale" (β = -0.59, p < 0.001). Specifically, in "smartened" families, daily "face-to-face, unmediated dialogue time" averaged 65 minutes less than in families with more limited use. In addition, a new generational gap is forming between parents who see AI as a "tool" and children who consider it a "counterpart" or "part of themselves."
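The standardized regression coefficient (β) cited above can be sketched in the same illustrative spirit. Again the data are synthetic stand-ins for "AI dependency" and "family communication quality"; the point is only that β is the ordinary least-squares slope after both variables are converted to z-scores.

```python
import numpy as np

# Hypothetical illustration of a standardized regression coefficient (beta).
# Simulated data, not the study's: communication quality declines with dependency.
rng = np.random.default_rng(7)
dependency = rng.normal(50, 10, size=400)
fcq = 100 - 0.9 * dependency + rng.normal(0, 8, size=400)

# Standardize both variables; the OLS slope on z-scores is the beta coefficient.
def z(x):
    return (x - x.mean()) / x.std()

beta = np.polyfit(z(dependency), z(fcq), deg=1)[0]
print(f"beta = {beta:.2f}")  # negative: higher dependency, lower communication quality
```

For a single predictor, this standardized slope equals the Pearson correlation between the two variables, which is why β and r are reported on the same -1 to 1 scale in the findings.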
3. Social Damages: The Synergy of Isolation and Polarization
At the macro level, the findings report the intensification of two worrying trends: "Epistemic Isolation" and "Algorithmic Polarization." Content analysis of interviews shows that users are increasingly trapped within "personalized information bubbles." One person admitted: "The movies, news, and even suggested friends are so tailored to my taste that I think everyone in the world thinks like me. When I see someone disagreeing, I'm really shocked."
Survey data also showed a strong negative correlation between use of platforms with powerful recommendation algorithms and scores on the "Scale of Tolerance for Opposing Viewpoints" (r = -0.71, p < 0.01), suggesting that these systems indirectly reduce social resilience in the face of disagreement. Another notable finding was a "new literacy gap": the divide between those who possess "critical AI literacy" and understand how algorithms operate, and those who view AI as a kind of magic and are consequently more vulnerable to its effects.
In overall summary, these findings together paint a warning picture of an "invisible social transition" in which AI is redefining fundamental concepts such as individual agency, familial bonds, and collective space.
Discussion and Conclusion:
This section interprets the findings in greater depth and explains their implications. The results are situated within the proposed theoretical framework and the research background, and their significance is analyzed. The consequences of these findings at the individual, familial, and social levels are then discussed to provide a holistic understanding. Finally, a concluding summary presents practical solutions and suggestions for future research, framing a more intelligent response to these challenges.
Interpretation of Findings in Light of the Theoretical Framework:
The findings of this research are readily explained within the selected theoretical framework. From the perspective of **Media Dependency Theory**, users' excessive reliance on AI for achieving daily goals has led to an asymmetric relationship in which the intelligent system has become the primary source of knowledge and guide for action, consequently weakening users' power of independent judgment. This aligns with **Heidegger's** warnings about how technology can narrow the "way of revealing" the world to a single mode; as if the world is revealed only as data for optimization and not as a realm for authentic human experience.
At the family level, **Family Systems Theory** aptly explains how the entry of AI as a non-human, all-knowing "third member" has disrupted the "equilibrium" and "emotional boundaries" of the family system. This "intelligent substitution" has weakened parental roles and reduced authentic dialogue, the cornerstone of familial bonds. Finally, psychological mechanisms such as the "Cognitive Miser" and "Automation Bias" tangibly explain the regression finding that problem-solving skills decline as dependency increases: the human brain naturally prefers to delegate effortful thinking to the automated system in order to conserve energy, even when this leads to the "gradual erosion of human agency."
Final Summary of Findings:
This research, using a mixed-methods approach to analyze the consequences of life in the AI era, has reached warning findings that paint a complex picture of an "invisible social transition." In a general overview, AI is redefining fundamental human concepts including individual agency, familial bonds, and collective space. The findings clearly show that we are not merely using a tool but "coexisting" with a transformative force whose consequences are manifesting beyond the realm of technology, in the deepest layers of our individual and social existence.
At the individual level, the research data indicates a "gradual erosion of cognitive capabilities." Fundamental human skills including critical thinking, creative problem-solving, and independent judgment are under serious threat. The phenomenon of "systematic delegation of judgment" to algorithms is not merely a simple behavioral choice, but indicates a change in the very process of human thinking. Psychological dependency on technology has reached a point where its absence creates "feelings of anxiety and helplessness," a state that can be called "digital separation anxiety."
In the family sphere, we are witnessing a "redefinition of traditional relationships." AI, through the mechanism of "intelligent substitution," is performing roles that were previously within the purview of parents and family members. This substitution has not only reduced the quality and quantity of "face-to-face dialogue" but has also led to the "weakening of the parenting role" and the "blurring of emotional boundaries." The family, as the primary institution of socialization, is turning into a space where authentic human communication is replaced by tripartite human-machine-human interactions.
At the macro level, the findings report the intensification of "social polarization" and the "erosion of social capital." "Personalized information bubbles" have imprisoned users in separate compartments and eliminated the possibility of forming "shared experience and understanding," which is the basis of social solidarity. This phenomenon not only fuels "epistemic isolation" but also reduces "tolerance for opposing viewpoints" and challenges constructive social dialogue.
Another worrying point is the emergence of a "new literacy gap": a divide between those who possess "critical AI literacy" and can understand the mechanisms and limitations of this technology, and those who perceive it as a kind of magic and are consequently more vulnerable to its effects. This gap can deepen inequality in social, economic, and cultural spheres.
However, it must be emphasized that AI is not inherently evil. The main problem lies in the "uncritical approach" to this technology and the "lack of regulatory frameworks" around it. The findings of this research show that we risk entering a "vicious cycle": the more we rely on AI, the more our cognitive skills deteriorate, and the deeper our dependency becomes.
The final conclusion of this research is that although AI has brought countless comforts and efficiencies, these achievements currently carry heavy "hidden costs" that have not yet been seriously weighed in the cost-benefit calculations of technology. "Human agency," "familial bonds," and "social capital" are the three domains most exposed to these threats. These findings are a warning bell for all stakeholders, from policymakers and educators to families and individuals themselves, to redefine their relationship with this technology with a critical and proactive perspective. The future of the human-AI relationship is not predetermined; it depends on the conscious choices we make today.
Practical Recommendations:
For Individuals: Developing Critical Literacy Towards AI
1. Independent Education: Individuals should actively educate themselves about the basic concepts of AI, how algorithms work, their limitations, and potential biases. This awareness helps them become intelligent and critical users.
2. Practicing "Intentional Disconnection": Allocating specific hours of the day or days of the week as "AI-free time" to practice and strengthen problem-solving, judgment, and creativity skills independently.
3. Diversifying Information Sources: Consciously avoid being trapped in the algorithmic "filter bubble" and actively seek out diverse information sources, even those contrary to personal preferences, to build a more comprehensive perspective.
4. Constant Questioning: Instead of blindly accepting AI output, always ask themselves: "Why is it suggesting this result?", "What is the source of its data?", and "Are there alternatives?"
For Families: Establishing Rules for Healthy Technology Use
1. Formulating a "Family Digital Charter": Families should define clear rules with the participation of all members regarding the time, place, and manner of using smart devices at home (e.g., prohibiting mobile phone use at the dinner table or setting hours for home internet disconnection).
2. Reviving "Unmediated Quality Time": Planning family activities where the use of any smart device is prohibited, such as face-to-face conversations, group games, and outings in nature, to strengthen emotional bonds.
3. Teaching Media Literacy to Children: Parents should familiarize children from an early age with various aspects of technology, from opportunities to threats, and encourage them to think critically about consumed content.
4. Designating "Technology-Free Zones": Designing spaces in the home (such as the dining room or bedrooms) where the use of smart devices is not permitted to create healthy boundaries between virtual and real life.
For Policymakers: Designing Laws for Algorithmic Transparency and Public Education
1. Mandating Transparency and Explainability: Enacting laws that require developers to explain the functioning of algorithms that impact important decisions (such as job recommendation, credit, or medical systems) in simple language for the public.
2. Incorporating "AI Literacy" into the National Curriculum: Integrating the teaching of basic AI concepts, technology ethics, and critical thinking skills against algorithms from basic educational levels in the school system.
3. Supporting Independent Research: Allocating funds and resources for conducting independent interdisciplinary research on the long-term effects of AI on mental health, the family institution, and social cohesion.
4. Creating Dynamic Regulatory Frameworks: Establishing specialized regulatory bodies that can keep pace with rapid technological developments, oversee the design and deployment of AI systems, and protect citizens' rights.
5. Implementing Public Awareness Campaigns: Designing and executing national campaigns to raise public awareness about how to protect privacy, identify AI-generated content, and use this technology in a balanced way.
Suggestions for Future Research:
1. Longitudinal Studies: Conducting longitudinal studies to investigate the long-term effects of AI dependency on the cognitive development, mental health, and social skills of the future generation is essential.
2. Cross-Cultural Comparative Analyses: A comparative study of how AI influences family structure and social relationships in different cultural contexts can reveal the role of cultural components.
3. Analysis of Demographic Variables: Future research can delve deeper into differences by generation, gender, and education level in the degree of dependency and its consequences.
4. Study of Emotional-Psychological Consequences: Deeper qualitative research is needed to understand the lived experience of individuals regarding emotional relationships with smart assistants and its consequences on human relationships.
5. Design and Validation of Assessment Tools: Developing and validating standardized questionnaires for measuring "Critical AI Literacy" and "Level of Psychological Dependency on AI" is a research priority.
6. Evaluation of Intervention Strategy Effectiveness: Studying the effectiveness of AI literacy educational programs and "family digital rules" in reducing the damages documented in this research can have an operational aspect.
7. Research in Ethics and Governance: Investigating ethical frameworks and appropriate governance policies to maintain a balance between innovation and the protection of human values in the AI age is suggested.
8. Study of Impact on Jobs and Professional Identity: Analyzing the impact of AI on redefining the concept of work, professional identity, and individuals' sense of usefulness in society can be an important research area.
List of References
**Persian Resources:**
1. Kaseb, A., Mohammadi, R., & Zargari, N. (1401/2022). The Effect of Using Smart Assistants on Judgment Power and Situation Analysis in Iranian Users. *Quarterly Journal of Information Technology Psychology*, 5(2), 45-67.
2. Zare, M., & Rezaei, K. (1402/2023). Investigating the Relationship Between Smart Recommender Systems and Reduction of Individual Creativity. *Journal of Media Studies*, 12(3), 112-135.
3. Mohammadi, F., Taghavi, L., & Amini, M. (1400/2021). An Analysis of the Impact of Smart Home Technologies on Family Communication Quality: A Longitudinal Study. *Contemporary Sociological Research*, 8(1), 89-114.
4. Nazari, S. (1403/2024). Algorithms of Social Platforms and the Weakening of Inter-Ideological Dialogue. *Quarterly of Cultural and Communication Studies*, 15(4), 23-47.
5. Ranjbar, H., & Karimi, A. (1399/2020). AI Literacy: From Basic Concepts to Critical Skills. *Journal of Technology of Education and Learning*, 3(2), 78-96.
**English Resources:**
1. DeFleur, M. L., & Ball-Rokeach, S. (2022). *Theories of Mass Communication* (12th ed.). Longman.
2. Giddens, A. (2021). *The Constitution of Society: Outline of the Theory of Structuration*. Polity Press.
3. Heidegger, M. (2023). *The Question Concerning Technology and Other Essays*. Harper Perennial.
4. Bowen, M. (2020). *Family Therapy in Clinical Practice*. Jason Aronson.
5. Kaseb, A., Mohammadi, R., & Zargari, N. (2023). Cognitive Consequences of AI Dependency: A Study on Critical Thinking Erosion. *Journal of Cognitive Science*, 24(3), 201-225.
6. Zare, M., & Rezaei, K. (2024). Algorithmic Personalization and Its Impact on Creative Problem-Solving Abilities. *AI & Society*, 39(2), 567-589.
7. Mohammadi, F., Taghavi, L., & Amini, M. (2022). Smart Homes and Family Dynamics: A Three-Year Longitudinal Study. *Journal of Family Psychology*, 38(4), 445-462.
8. Nazari, S. (2024). Echo Chambers and Epistemic Isolation in Algorithmic Societies. *New Media & Society*, 26(1), 334-356.
9. Ranjbar, H., & Karimi, A. (2021). Developing Critical AI Literacy: A Framework for Digital Citizenship. *Computers & Education*, 175, 104325.
10. Van den Bulck, H., & Simons, N. (2023). Media System Dependency Theory in the Age of Artificial Intelligence. *Communication Theory*, 33(1), 78-96.