Digital Identity as a Right (II): between data protection and the digital representation of the individual
Lorenzo Cotino, president of the Spanish Agency for Data Protection (AEPD)
Image by rawpixel.com on Freepik.
A previous post on this blog (Digital Identity as a Right) highlighted that identity should not be viewed solely as a service or as a technical matter of identification, but rather as a reality closely linked to the legal standing of the individual and to effective access to their rights. That starting point retains its full value. However, technological evolution and the expansion of digital environments invite a broader reflection: alongside classical legal identity, the need to protect the digital identity of the individual is becoming increasingly prominent — understood no longer as a mere authentication tool, but as the individual’s projection into digital environments defined by data, profiles, reputations, inferences and other forms of representation.
Identity has long enjoyed legal recognition at the international level. International human rights law is founded, in its most basic formulation, on the recognition of legal personality (Art. 6 of the Universal Declaration of Human Rights and Art. 16 of the International Covenant on Civil and Political Rights). Particularly clear in this regard is Art. 8 of the Convention on the Rights of the Child, which protects the right of the child to preserve their identity, including elements such as name, nationality and family relations, as well as the link between deprivation of identity, exclusion from access to rights and discrimination (Inter-American Court, Case of the Girls Yean and Bosico v. Dominican Republic, judgment of 8 September 2005, Series C No. 130).
In Spain, it suffices to note that Article 53 of Law 20/2011, on the Civil Registry, recognises the right to a name and its registration as a basic element of a person's civil identity. In addition, Article 8 of Organic Law 4/2015 recognises the right of Spanish nationals to obtain the National Identity Document as proof of identity, while Article 13 governs the verification of identity of foreign nationals in Spain. This classical legal foundation demonstrates that identity — including as a right — has traditionally been articulated through formal elements such as name, nationality or civil registration.
A second dimension of the right to identity is now apparent in the sphere of digital identification, namely the ability to verify a person's identity and express their intention in the electronic environment through legally recognised means. This is reflected in the legislation on electronic administration. Thus, Law 39/2015 recognises the right to obtain and use electronic identification and signature means (Art. 13), regulates electronic identification systems for interested parties (Art. 9) and establishes the electronic signature methods accepted in administrative procedures (Art. 10). These provisions allow citizens to verify their identity and express their intention through digital means with full legal validity. Although not formulated as a right, Regulation (EU) No 910/2014 (eIDAS Regulation) establishes a comprehensive legal framework for electronic identification and imposes obligations on Member States to recognise certain electronic identification means and allow their use in digital public services.
The recent evolution towards European Digital Identity Wallets, in the context of the revision of eIDAS, also introduces new challenges from a data protection perspective, particularly with regard to the minimisation of shared attributes, the potential tracking of their use, and the risks of correlation or re-use of information that may affect the configuration of an individual's digital identity. These issues have been analysed both by the AEPD and by the EDPS, which highlights in particular the risks of linkability between credential uses, excessive identification, and the need for an effective design in accordance with the principles of data protection by design and by default.
However, in the digital society, identity — and the right to identity itself — also encompasses the manner in which an individual is represented and treated within data and algorithm ecosystems. It is therefore not confined to administrative electronic identity or authentication mechanisms, but includes the attributes, histories, profiles, inferences, behavioural patterns or digital reputation that shape an individual's projection in the digital environment, and even the reproductions or simulations of their appearance, voice or distinctive features. This understanding of digital identity as a complex, dynamic category constructed from data and representations has been highlighted in recent literature, which underlines that it is a diffuse reality, difficult to confine to a single concept, composed of multiple facets and capable of performing distinct and even contradictory functions (M. Robles-Carrillo, Digital identity: an approach to its nature, concept, and functionalities, International Journal of Law and Information Technology, 2024).
In this context, very specific problems are beginning to emerge, such as the generation of “digital twins” that replicate a person's behaviour, the use of profiling and scoring systems that determine decisions affecting them, or the persistence and use of digital identities after death — all of which highlight the need for effective control over one's own representation.
The legal problem thus shifts from focusing solely on how a person's identity is verified to encompassing how the digital representation projected onto that person is constructed, disseminated and used, as well as the degree of control the individual retains over that representation. In this context, the individual no longer appears merely as the holder of a document or an authentication means, but also as the subject of a digital representation generated from data and automated processing. That representation may be shaped through behavioural profiles, inferences, activity histories, biometric data, platform reputation, automated scores or even persistent avatars that end up functioning as an operational version of the individual in digital environments.
From this perspective, a contemporary understanding of the right to digital identity could encompass several dimensions. First, protection against the alteration, manipulation or impersonation of digital identity by third parties. Second, the right to know and challenge the attributes, profiles or inferences that contribute to representing the individual in digital environments. Third, the preservation of contextual separations, including the possibility of acting under a pseudonym where that is legitimate. Fourth, protection against systems that impose a single, closed or distorted version of the individual. And finally, the requirement that the technological and organisational architecture of systems does not generate exclusion, dependence or a substantial loss of autonomy over one's own digital representation.
This broader understanding finds an initial formulation in the Charter of Digital Rights (2021). In its Section II, “Right to identity in the digital environment”, the Charter states that “the right to one's own identity is enforceable in the digital environment” and recognises the right to manage one's own identity, its attributes and credentials, expressly adding that “identity may not be controlled, manipulated or impersonated by third parties against the will of the individual”.
The Charter further recognises the right to pseudonymity and, in Section XXVI on the use of neurotechnologies, introduces a particularly intense dimension of this protection by requiring that “each person's control over their own identity” be guaranteed, along with self-determination and control over data relating to their brain processes. The Charter has no binding normative force, but it does offer a formulation of considerable interest, since it shifts the focus from mere electronic identification towards a broader understanding of digital identity as a dimension of the personality deserving of its own protection.
Personal data protection legislation today provides a broad framework for addressing many of the elements that make up digital identity. The very concept of personal data is defined broadly, covering any information relating to an identified or identifiable natural person and thereby reaching most of the components of a person's digital identity. On this basis, the principles of processing — in particular accuracy, data minimisation, purpose limitation, and integrity and confidentiality — together with data protection by design and by default, introduce limits aimed at preventing inaccurate, excessive or unduly aggregated representations of the individual. Many particularly serious forms of conduct — such as identity fraud or impersonation — will, in most cases, amount to processing of personal data without a legal basis, which directly triggers the safeguards and remedies of the GDPR. To this are added the rights of data subjects (Chapter III of the GDPR). The right of access (Article 15) and the transparency obligations make it possible to ascertain what data, profiles or inferences are being used about a person. The rights to rectification, erasure, objection and restriction of processing (Articles 16, 17, 21 and 18 respectively) facilitate responses to inaccurate data or unwanted processing. Finally, the GDPR establishes specific safeguards against automated decision-making, including profiling, introducing mechanisms for challenging decisions when an individual is subject to decisions based solely on automated processing which produce legal effects concerning them or similarly significantly affect them.
Data protection law has demonstrated over decades a notable capacity to absorb new legal and social demands arising from technological evolution, progressively integrating problems that initially fell outside its original formulation. However, this framework alone does not exhaust all the issues surrounding digital identity. Its logic centres on controlling the processing of personal data, but it encounters limits when identity is considered as a broader reality. In particular, it does not fully address the overall configuration of identity from multiple data points and inferences, nor does it guarantee control over the social or reputational image of a person that circulates in the public sphere. Nor does it prevent profiles or inferences from ultimately operating in practice as a replacement identity, or ensure the continuity of personal identity in fragmented digital environments. The problem of digital identity thus goes beyond the isolated processing of data and raises additional questions about control over the overall representation of the individual and protection against possible substitution by digital constructs generated by third parties.
Alongside data protection, other avenues of legal protection are emerging that may initially seem surprising. Noteworthy is the recent experience of Denmark (European Parliament), where in 2025 the Government, with broad parliamentary support, introduced a bill to reform copyright legislation in response to the phenomenon of deepfakes and hyper-realistic digital imitations generated by artificial intelligence. The bill introduces new provisions in the Danish Copyright Act — in particular the new §§ 65a and 73a — with the aim of strengthening protection against unconsented digital reproductions of a person's appearance, voice or distinctive features. The rationale of the system is to use copyright and related rights tools to prevent the creation and dissemination of digital imitations that simulate a real person. In this way, the affected individual could demand the removal of the content, the cessation of its use and, where appropriate, compensation for damages. The model also provides for enhanced protection for performing artists and retains traditional exceptions such as parody or satire.
In Spain, the debate has also begun to be addressed from the perspective of labour regulation in the cultural sector. In the context of the reform of the Artist's Statute, promoted by the Ministry of Labour and Social Economy, measures have been proposed to regulate the use of generative artificial intelligence systems in cultural production, including the requirement of express agreements for certain uses, possible economic compensation and limits on the substitution of human artistic work. This is a regulatory approach still in development, but significant, as it reflects a similar concern: preventing the ability to digitally replicate a person's voice, appearance or performance from enabling their exploitation without legal control or recognition.
This body of initiatives confirms that the legal problem is no longer confined to fraud or the technical security of digital identification, but also extends to the appropriation and exploitation of the individual's digital projection. From a practical standpoint, data protection, intellectual property, labour regulation and personality rights offer relevant avenues of protection to demand the removal of content, the cessation of unauthorised uses or compensation for damages.
However, each of these models has its limits. Data protection does not always reach the broader dimension of the social or algorithmic representation of the individual. Intellectual property is effective against digital reproductions or imitations, but less suited to addressing the ongoing construction of identity through data, reputation or behaviour. Personality rights remain the natural avenue for addressing infringements of honour, privacy or one's own image, although they do not always respond directly to new forms of digital representation.
This fragmentation reveals that there is not yet a single, fully satisfactory framework. It is foreseeable that for years different protection avenues will coexist in a complementary manner, each covering specific facets of digital identity. At the same time, it is reasonable to anticipate progress towards a more precise recognition of the dimensions requiring protection, particularly with regard to control over the representation of the individual, the accuracy of that representation, and the prevention of its substitution by digital constructs generated by third parties.
In this process, data protection law appears — once again — to be called upon to play a central role. Over recent decades, its principles, rights and rules have demonstrated a remarkable capacity to adapt to technological transformations and to new forms of information processing in digital environments. Without fully exhausting all facets of digital identity, this framework offers a particularly apt basis for integrating many of these new requirements, whether through evolutive interpretations or through specific regulatory developments. It is therefore reasonable to expect that a significant part of the guarantees linked to digital identity will ultimately be consolidated in the field of data protection, particularly where the digital representation of the individual affects decisions concerning them or conditions their access to services and rights, without prejudice to the need for other complementary avenues of protection.
This blog post is related to other materials published by the Innovation and Technology Division of the AEPD, such as:
- Blog post eIDAS2, the European Digital Identity Wallet and the GDPR (I) [Jan 2025]
- Blog post eIDAS2, the European Digital Identity Wallet and the GDPR (II) [Jun 2025]
- Blog post eIDAS2, the European Digital Identity Wallet and the GDPR (III) [Oct 2025]
- Blog post eIDAS2, the European Digital Identity Wallet and the GDPR (IV) [Dec 2025]