Temporal selves under siege: Artificial Intelligence and the need for privacy as a right to becoming

Lemi Baruh and Mihaela Popescu

When we talk about online privacy, what comes to mind? For decades, the dominant answer in law and everyday thinking has centred on one idea: control over personal information. This approach, called informational privacy, treats our personal data – our clicks, likes, searches, and purchases – as an extension of us. The core belief is that we, as individuals, should have the authority to determine when, how, and to what extent information about us is shared with others.1

This thinking isn’t new. Its roots trace back to the 1970s and the Fair Information Practice Principles (FIPPs). These guidelines recommended basic rights like knowing what data is collected, seeing and correcting our records, and limiting how companies use information beyond its original collection purpose. From the 1980s onward, these principles spread globally, shaping standards from the OECD and the Council of Europe.2

Today, this “control over data” thinking is the bedrock of major regulations like Europe’s General Data Protection Regulation (GDPR)3 and state-level laws in the U.S. like the California Consumer Privacy Act (CCPA).4 These regulations empower individuals with specific rights: the right to access the data held about them, the right to correct inaccuracies, the right to request deletion, and the right to object to or opt out of certain processing or the sale of their data. The main mechanism? Usually, it’s “notice and consent” – companies provide a privacy policy (the notice), and we click “agree” (the consent), theoretically putting us in charge of the data flow.

While personal control over information sounds appealing, this approach faces serious challenges in practice. The sheer scale of data collection makes meaningful consent all but impossible. We’re asked to agree to lengthy privacy policies, but understanding how our data might be used downstream – combined with other datasets or fed into algorithms – is beyond most people’s capability. Often, the choice is simply a “take-it-or-leave-it” clickwrap agreement; refuse, and we lose access to the service entirely.5 This lack of real choice often produces resignation rather than genuine consent.6

Furthermore, this focus on notice and consent doesn’t fundamentally challenge the business model – what Shoshana Zuboff famously termed “surveillance capitalism.”7 Instead of limiting data harvesting, it often legitimizes it. By getting us to “agree,” the system shifts responsibility onto us, the individuals, while allowing the large-scale collection and monetization of personal data to continue largely unchecked. Regulations like GDPR, while aiming for protection, may even inadvertently strengthen the dominance of large platforms better equipped to handle compliance costs.8

The growing ubiquity of machine learning algorithms and artificial intelligence throws another wrench into the works.9 These technologies operate at a scale and complexity far beyond simple data sharing. Algorithms don’t just store the data we provide; they analyse it to make inferences and predictions about us – our personalities, preferences, vulnerabilities, and future behaviour.10 These algorithmically generated insights, the “outputs,” often exceed the “inputs” we initially consented to share. The complex, proprietary nature of these algorithms – the “black boxes” of the digital age – makes it difficult to understand how these powerful inferences shape our opportunities and experiences.11 Merely having control over our raw data provides little protection against the privacy harms stemming from how that data is interpreted and used by these increasingly powerful algorithmic systems.
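To make the “outputs exceed inputs” point concrete, consider a deliberately toy sketch (ours, not drawn from any cited study): a model trained on page “likes” ends up asserting a probability about a sensitive trait the user never disclosed. The data, the trait, and the labels below are entirely invented for illustration.

```python
# Toy illustration (hypothetical data): a classifier trained on behavioural
# "inputs" (which pages a user liked) produces an "output" the user never
# disclosed - a probabilistic claim about a sensitive trait.
from sklearn.linear_model import LogisticRegression

# Each row: did the user like pages A, B, C, D? (1 = yes, 0 = no)
likes = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
# Invented sensitive trait labels gathered during training,
# never asked of the new user below.
trait = [1, 1, 0, 0]

model = LogisticRegression().fit(likes, trait)

# A new user consents only to sharing page likes...
new_user = [[1, 1, 1, 0]]
# ...yet the system now holds a prediction about an undisclosed trait.
print(model.predict_proba(new_user)[0])
```

The point of the sketch is that consenting to the input (likes) says nothing about the inference drawn from it.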

Privacy as a Right to Becoming: Protecting the Development of the Autonomous Self

Given the shortcomings of seeing privacy purely as data control, especially in our algorithmic age, we need a different approach. Instead of focusing narrowly on managing information flows, we propose reframing privacy as essential for protecting and nurturing our capacity for self-formation and autonomous action – a concept we call privacy as a right to becoming. This perspective shifts the focus from data points to the person, seeing individuals not just as data subjects, but as socially embedded, temporal beings actively engaged in shaping their own lives.

Central to this idea is autonomy. Not the isolated, purely rational self often imagined in liberal theory, but a relational autonomy that recognizes how we develop our sense of self and our ability to make meaningful choices through our connections with others and within specific social and cultural contexts. True autonomy – the freedom to ask what kind of a person one wants to be and what kind of a life one wants to lead – isn’t just about being free from external obstacles.12 It requires conditions that allow us to authentically identify with our own desires, reflect on them without undue manipulation, and form and commit to life goals and projects.13

Privacy is crucial for creating these conditions. It provides the necessary space – both literally and metaphorically – for the self-reflection and self-discovery vital for autonomous living. This includes controlling access not just to our data (informational privacy), but also to our physical spaces and our decision-making processes.14 These dimensions of privacy help us manage our relationships with ourselves and others, allowing us to develop and exercise the competencies needed to guide our lives according to our own values.

We argue that this process of becoming autonomous unfolds across time. Think about how we understand ourselves – as part of a dynamic narrative. Our identity is shaped through internal dialogue spanning past, present, and future: we reflect on memories and experiences, engage with our current context, and project aspirations for who we want to be. This process helps us make sense of where we’ve come from, where we stand now, and where we’re headed. It enables us to assemble a coherent life story, take ownership of our past actions, and act intentionally toward future goals. In doing so, we become people who can make authentic choices and take responsibility for them. This capacity allows us to experience our lives as a meaningful whole, connecting who we were, who we are, and who we hope to be.15

Sociologists Mustafa Emirbayer and Ann Mische provide a useful framework for understanding this dynamic engagement with time through their concept of the “agentic triad.”16 They argue agency emerges from the interplay of three temporal dimensions:

  • Practical-evaluative dimension (the present): Our capacity to make practical and normative judgments among the alternative trajectories of action available to us now.
  • Projective dimension (the future): Our imaginative generation of possible future trajectories and outcomes of action.
  • Iterational dimension (the past): Our selective reactivation of past patterns of thought and action, classifying prior experience by its similarity to the current situation.

We are constantly engaged in this internal temporal conversation, balancing habits from the past with present realities and future hopes. Learning to manage this internal dialogue effectively is how we learn to be autonomous agents. It’s a skill developed through practice – through reflection, making choices, and even making mistakes within a space protected enough to allow for genuine self-exploration. This isn’t about achieving a final, static state of autonomy, but about engaging in the ongoing process of becoming.

Privacy as a right to becoming defends our capacity for this vital temporal work. Privacy functions as a necessary condition for navigating the agentic triad. It provides quiet space for iteration, allowing us to reflect on past actions and habits without constant external judgment or the pressure of immediate reaction. It safeguards our practical evaluation of the present by allowing moments of focused attention, free from manufactured distractions designed to capture immediate impulses rather than serve long-term goals. And it shelters projection, giving us room to imagine and rehearse possible futures before committing to them in public.

Artificial Intelligence and the Case for Privacy as a Right to Becoming

Privacy as a dynamic process of “becoming” – essential for shaping our identities and fostering autonomy through an ongoing dialogue with our past, present, and future – faces profound challenges in the age of pervasive artificial intelligence. While the traditional “privacy as control” model already struggles with the sheer scale and complexity of modern data practices, the lens of “privacy as a right to becoming” that we propose helps us see how these technologies can more deeply undermine the very foundations of self-development. It’s not just about data points being collected; it’s about how AI and algorithms actively intervene in our temporal experience, potentially derailing our capacity to make meaningful choices (and to learn how to make choices) that we can claim as our own, thereby threatening our journey of becoming who we aspire to be.

These systems don’t just manage information; they actively shape our experiences and, in doing so, can profoundly disrupt the processes through which we develop and maintain a coherent sense of self across past, present, and future.

Consider how algorithms like recommendation systems curate and re-present our histories. They selectively highlight certain memories and behaviours, often without our awareness of the choices being made. From the perspective of privacy as a right to becoming, this isn’t just a matter of data accuracy or the veracity of inferences; it’s about algorithmic narratives potentially altering our personal stories – how we view our past and understand ourselves. Our ability to own our narrative, to draw lessons from our past and integrate them into who we are becoming, is crucial for developing autonomy.

Similarly, algorithms that shape our present experiences raise concerns beyond the loss of control over information. Recommendation engines and personalized feeds can create “filter bubbles” that narrow perspectives and limit exposure to the diverse viewpoints vital for critical thinking and for the practical-evaluative dimension of agency.17 This invisible algorithmic steering can undermine our ability to make choices that reflect our values and long-term goals. The privacy that traditionally affords breathing room for independent thought is eroded when our present moments are mediated and manipulated by systems designed to steer attention and influence actions.
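A minimal simulation can show how such narrowing emerges from an innocuous-looking feedback loop. This is a hypothetical sketch, not any real platform’s recommender: the system weights topics by past clicks, the simulated user clicks from what is shown, and exposure typically contracts over time.

```python
import random

# Hypothetical feedback-loop sketch: the recommender favours topics the user
# clicked before, and the simulated user clicks only what is shown, so the
# range of topics the user encounters tends to narrow over time.
TOPICS = ["politics", "science", "sports", "arts", "travel"]

def recommend(history, k=3):
    # Weight each topic by 1 + the number of past clicks on it.
    weights = [1 + history.count(t) for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

history = []
for _ in range(50):
    shown = recommend(history)
    history.append(random.choice(shown))  # user clicks one recommended item

# Distinct topics seen early vs. late: the late window is typically narrower.
print("early:", sorted(set(history[:10])))
print("late: ", sorted(set(history[-10:])))
```

Nothing in the loop is malicious; the narrowing is an emergent property of optimizing for past engagement, which is precisely why it escapes a consent-based frame.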

Looking to the future, the predictive power of algorithms and the “data-driven personas” they construct can significantly impact our projective capacities – our ability to imagine and strive for different futures. When algorithms engage in anticipatory and pre-emptive governance,18 they use past data to predict and shape our future pathways, or target us during vulnerable transitional moments. This isn’t just about data being used; it’s about our potential life trajectories being directed and, perhaps, limited by predictive models. Privacy as a right to becoming is about our ability to defend ourselves and to engage authentically and autonomously with our past, present, and future, shielding the core processes of self-development from these profound algorithmic interventions.

The European Union’s AI Act and the emerging field of “neurorights” indicate a growing recognition that the privacy challenges posed by AI extend beyond questions of data control. These initiatives begin to address privacy as a right to becoming – protecting our autonomous self-development in an algorithmic world. They acknowledge that technological interventions, particularly AI and neurological interfaces, can impact our freedom of thought and our ability to make choices free from manipulation, both of which are vital for our process of becoming.

The EU AI Act aims to address such deeper concerns by, for example, banning AI systems that use “subliminal techniques beyond a person’s consciousness to materially distort behaviour” or that exploit vulnerabilities related to age or disability.19 This directly resonates with privacy as a right to becoming by recognizing that certain algorithmic influences threaten our core ability to self-determine. However, the AI Act still struggles to define what counts as “manipulation” or as an impairment of “informed decision-making” in complex AI interactions, and to explain what makes AI-based manipulation different from other manipulative practices.

Here, privacy as a right to becoming offers a richer vocabulary. It articulates why AI-driven manipulation is damaging: not merely because the interests of organizations and data subjects diverge, but because such manipulation interferes with our internal temporal dialogue – our ability to reflect on the past, evaluate the present, and project a future authentically. It explains the harm in terms of undermined competencies for skilful choice, narrative control, and self-justification. This makes privacy as a right to becoming more flexible in adjusting to shifts in the operational logic of future technologies.

Similarly, discussions around “neurorights” arise from the development of neurotechnologies like brain-computer interfaces (BCIs), which promise direct pathways to “reading” and even influencing brain activity. The concern here is profound, touching upon mental privacy, cognitive liberty, and the very integrity of our thoughts and mental states.20 The privacy as a right to becoming framework emphasizes that our minds are not just static entities needing protection, but are the dynamic seat of self-formation. Interference with our cognitive processes, memories, or future-oriented thought directly impacts our ability to construct and live our life stories.

Privacy as a right to becoming extends the neurorights conversation by highlighting that the harm isn’t just the extraction of neurological or other biometric data, but the disruption of the ongoing, temporally grounded process of self-development that defines us as autonomous individuals. It helps us understand that protecting the “mind” is also about protecting the unfolding narrative of a life. Ultimately, privacy as a right to becoming provides a unifying lens that focuses on the individual’s lifelong journey of self-creation, emphasizing the need to safeguard the temporal dimensions of autonomy.

Notes

1. Woodrow Hartzog, “The Inadequate, Invaluable Fair Information Practices,” Maryland Law Review 76 (2017): 952–77.

2. Fred H. Cate, “The Failure of Fair Information Practice Principles,” in Consumer Protection in the Age of the “Information Economy,” ed. Jane K. Winn (Abingdon, Oxon: Routledge, 2006), 343–79.

3. European Parliament and Council of the European Union, “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation),” L 119 Official Journal of the European Union § (2016).

4. “California Consumer Privacy Act of 2018,” California Civil Code § 1798.100 et seq. (2018).

5. Lemi Baruh and Mihaela Popescu, “Big Data Analytics and the Limits of Privacy Self-Management,” New Media & Society 19, no. 4 (2017): 579–96, https://doi.org/10.1177/1461444815614001.

6. Nora A. Draper and Joseph Turow, “The Corporate Cultivation of Digital Resignation,” New Media & Society 21, no. 8 (August 2019): 1824–39.

7. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019).

8. Damien Geradin, Theano Karanikioti, and Dimitrios Katsifis, “GDPR Myopia: How a Well-Intended Regulation Ended up Favouring Large Online Platforms – the Case of Ad Tech,” European Competition Journal 17, no. 1 (January 2, 2021): 47–92.

9. Daniel J. Solove, “Artificial Intelligence and Privacy,” Florida Law Review, 2025.

10. Tal Z. Zarsky, “Privacy and Manipulation in the Digital Age,” Theoretical Inquiries in Law 20, no. 1 (2019): 157–88; Karen Yeung, “‘Hypernudge’: Big Data as a Mode of Regulation by Design,” Information, Communication & Society 20, no. 1 (January 2017): 118–36.

11. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2015).

12. Beate Rössler, The Value of Privacy, trans. R. D. V. Glasgow (Cambridge, UK: Polity Press, 2005).

13. Rössler.

14. Neil Richards, Why Privacy Matters (New York, NY: Oxford University Press, 2022).

15. Marya Schechtman, “The Narrative Self,” in The Oxford Handbook of the Self, ed. Shaun Gallagher (Oxford, UK: Oxford University Press, 2011), 394–418.

16. Mustafa Emirbayer and Ann Mische, “What Is Agency?,” American Journal of Sociology 103, no. 4 (1998): 962–1023.

17. Sofia Bonicalzi, Mario De Caro, and Benedetta Giovanola, “Artificial Intelligence and Autonomy: On the Ethical Dimension of Recommender Systems,” Topoi 42, no. 3 (July 2023): 819–32.

18. Zuboff, The Age of Surveillance Capitalism.

19. European Parliament and Council of the European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act),” Official Journal of the European Union, L series (2024).

20. Daniel Susser and Laura Y. Cabrera, “Brain Data in Context: Are New Rights the Way to Mental and Brain Privacy?,” AJOB Neuroscience 15, no. 2 (April 2024): 122–33.

Lemi Baruh (Ph.D., University of Pennsylvania, Annenberg School for Communication, 2007) is a Senior Lecturer at the School of Communication and Arts, The University of Queensland, and the co-director of the Social Interaction and Media Lab at Koç University, Istanbul, Turkey. His research focuses on digital media and communication technologies, exploring issues such as interpersonal relationships, online safety and security, privacy, surveillance, and decision-making.

Mihaela Popescu (Ph.D., University of Pennsylvania, Annenberg School for Communication) is a Professor of Digital Media in the Department of Communication & Media at California State University, San Bernardino, and the Faculty Director of the Extended Reality for Learning Lab (xREAL). Her research and teaching interests include media and communication policies, privacy and surveillance, immersive and algorithmic media, and human-machine communication.
