SIMULATED SELF (IN PROCESS)

Cloning myself to provoke dialogue about our urge to outsource our consciousness to technology.

domain / medium

Gen AI / Prompt engineering / Research

year

2023-2024

role

solo project

thesis

For my MFA thesis, I fine-tuned ChatGPT on my personal data to create my digital clone. This clone then directed my every activity, every day, for months.
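As a rough illustration of what a setup like this can involve: OpenAI's fine-tuning API accepts training examples as JSONL chat transcripts. The sketch below is my own minimal example, not the project's actual code; the system prompt, the Q&A pairs, and the file name are all invented, though the record structure follows the API's chat format.

```python
import json

# Hypothetical example: shaping personal Q&A pairs into the chat-format
# JSONL that OpenAI's fine-tuning API expects. The system prompt and
# sample data here are illustrative placeholders.
def to_training_record(question: str, my_answer: str) -> str:
    """Turn one Q&A pair from my archives into a single JSONL line."""
    record = {
        "messages": [
            {"role": "system", "content": "You are a digital clone of the artist."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": my_answer},
        ]
    }
    return json.dumps(record)

# Writing a small corpus out as one record per line:
pairs = [
    ("What should I do this morning?", "Start with a long walk, then sketch."),
    ("What do I usually eat for lunch?", "Something quick; you rarely plan it."),
]
with open("clone_training.jsonl", "w") as f:
    for q, a in pairs:
        f.write(to_training_record(q, a) + "\n")
```

The resulting file is what gets uploaded to the fine-tuning endpoint; the more such pairs reflect your real voice and habits, the more the tuned model behaves like a clone.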

My clone almost immediately limited me to repetitive activities, demonstrating how over-reliance on algorithms narrows our decision-making, independent thinking, and sense of agency.

Key learning

One of the most interesting parts of this thesis for me has been figuring out what doesn't work, and why.

My first prototype of this project was a Unity game in which the player navigates five valleys (a metaphor for the "uncanny valley," the unsettling sensation people experience when confronted with a robot that looks almost, but not quite, human). In each valley, they would encounter one of my clones (NPC avatars I'd modeled - see example at top of page), linked to ChatGPT and Amazon Polly to enable live, unique conversations. Each of the five clones was trained on my personal data as exported from one of five companies: Apple, Facebook, Instagram, Amazon, and Google.
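The conversation loop behind each clone NPC can be sketched roughly as follows, assuming the OpenAI chat API for text replies and Amazon Polly (via boto3) for speech. The persona text, model name, voice id, and function names below are my own illustrative choices, not the project's actual implementation.

```python
def build_messages(persona: str, history: list[dict], player_line: str) -> list[dict]:
    """Assemble one chat request: the clone's persona as the system
    prompt, then the running dialogue, then the player's newest line."""
    return (
        [{"role": "system", "content": persona}]
        + history
        + [{"role": "user", "content": player_line}]
    )

def clone_speaks(persona: str, history: list[dict], player_line: str) -> bytes:
    """One turn: get a text reply from the model, then voice it with Polly.
    Requires OPENAI_API_KEY and AWS credentials in the environment."""
    # Third-party imports are kept inside the function so the pure
    # message-building logic above works without these packages installed.
    from openai import OpenAI
    import boto3

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=build_messages(persona, history, player_line),
    ).choices[0].message.content

    polly = boto3.client("polly")
    audio = polly.synthesize_speech(Text=reply, OutputFormat="mp3", VoiceId="Matthew")
    return audio["AudioStream"].read()  # MP3 bytes for the game engine to play
```

In a Unity setup, a small service like this would typically run outside the engine, with the game sending the player's line over HTTP and streaming the returned audio to the avatar.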

In theory, each clone would have revealed the identity and personality profile the respective company had inferred about me from my digital behavior. The differences between them could have exposed how reductive an image algorithms paint of us. My hope was that this would lead people to stop treating technology as an all-knowing, all-seeing oracle, and to become more critical of how their personal data and digital behavioral patterns are harnessed to draw conclusions about them outside their control.

Why it didn't work (based on research and conversations with people at Google, Meta, Apple, and Amazon):