Our prototype encompasses three steps, detailed below, evoking a typical processing pipeline for your online traces:
(1) Extract >>> (2) Profile >>> (3) Recommend.
Both the second and third steps rely on ChatGPT, a large language model trained on a vast corpus of text.
Although it enabled quick prototyping, it comes with clear downsides and pitfalls, as briefly discussed at the end of this section.
::::[1] Extract:::: It processes the HTML data you provide to extract a list of Google Search queries.
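As an illustration, the extraction step could be sketched as follows. This is a minimal sketch, not our actual code: it assumes the participant provides a Google Takeout-style HTML export in which each query appears as a link to a google.com/search?q= URL; the real export format may differ.

```python
from html.parser import HTMLParser


class SearchQueryExtractor(HTMLParser):
    """Collects the text of links pointing to Google Search result pages."""

    def __init__(self):
        super().__init__()
        self.queries = []
        self._in_search_link = False

    def handle_starttag(self, tag, attrs):
        # Assumed export format: each query is an <a> whose href targets a search URL.
        if tag == "a" and "google.com/search?q=" in dict(attrs).get("href", ""):
            self._in_search_link = True

    def handle_data(self, data):
        if self._in_search_link and data.strip():
            self.queries.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_search_link = False


def extract_queries(html: str) -> list[str]:
    """Return the list of search queries found in an HTML history export."""
    parser = SearchQueryExtractor()
    parser.feed(html)
    return parser.queries
```

For example, feeding it `<a href="https://www.google.com/search?q=mushrooms">mushrooms</a>` yields `["mushrooms"]`, while links to other domains are ignored.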
::::[2] Profile::::
Here, we prompt ChatGPT with the participant's search history previously extracted, and ask it to speculate about the personality, desires, hopes, fears, intentions, values, beliefs, personal or political opinions, psychological states, etc., of the persona behind these online traces.
From this emerges your digital double, a fictional character inspired by the participant's online traces.
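In code, the profiling step amounts to assembling a prompt like the one below and sending it to ChatGPT. The wording here is a hypothetical paraphrase of the description above, not the exact prompt used in the experiment, and the API call itself is omitted.

```python
# Hypothetical paraphrase of the profiling prompt; the exact wording differs.
PROFILE_PROMPT = (
    "Below is a list of Google Search queries made by one person.\n"
    "Speculate about the personality, desires, hopes, fears, intentions, "
    "values, beliefs, personal or political opinions, and psychological "
    "states of the persona behind these online traces.\n\n"
    "Queries:\n{queries}"
)


def build_profile_prompt(queries: list[str]) -> str:
    """Assemble the speculation prompt from the extracted search queries."""
    return PROFILE_PROMPT.format(queries="\n".join(f"- {q}" for q in queries))
```

The resulting string is then sent to ChatGPT (e.g. via a chat completion request), and the model's answer becomes the digital double's profile.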
::::[3] Recommend::::
Given this digital double profile, we prompt ChatGPT to provide targeted recommendations for the persona.
Here, beyond the 'Please no' recommender --which imitates basic capitalist recommenders--, we experiment with different flavors of recommenders.
Each of the recommenders you can test in our experiment corresponds to a different prompt fed to ChatGPT.
::::Please no:::: Recommender enacting a neoliberal logic of consumption of products, services or experiences.
::::Vulnerable:::: Recommender encouraging you to acknowledge, embrace and share your vulnerability, by suggesting things you could be vulnerable to.
::::Care:::: Recommender fostering care for both living and non-living entities, by suggesting critters, gestures, or any X you could care about.
::::Dada:::: Recommender aiming to infuse poetic gestures into your everyday life, by suggesting poetic --borderline absurd, yet somehow mundane-- acts to perform.
Dada R. is inspired by Dadaism and Event Scores*, as used by Yoko Ono or the Fluxus movement. *An event score is a performance art script consisting of descriptions of actions to be performed.
::::Anti:::: Recommender ironically imagining the things you are the least likely to like.
::::Doubt:::: Recommender fostering a practice of doubt, aiming to make you doubt your own beliefs by suggesting things you could be wrong about.
::::Situated:::: Recommender promoting situated perspectives and experiences you may not have encountered, or could be more aware of, by suggesting that you familiarise yourself with different forms of otherness.
::::Fungal:::: Recommender rooted in fungal values of resilience, interconnectedness, symbiosis, etc.
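Mechanically, the recommenders above can be pictured as a table mapping each flavor to its instruction, prefixed by the digital double's profile. The templates below are hypothetical one-line paraphrases of the descriptions above; the prompts actually fed to ChatGPT in the experiment are different and more elaborate.

```python
# Hypothetical one-line paraphrases of each recommender's prompt; the prompts
# actually used in the experiment differ.
RECOMMENDER_PROMPTS = {
    "please_no": "Recommend products, services or experiences this persona would consume.",
    "vulnerable": "Suggest things this persona could be vulnerable to.",
    "care": "Suggest critters, gestures, or anything this persona could care about.",
    "dada": "Suggest poetic, borderline absurd yet mundane acts for this persona to perform.",
    "anti": "Ironically imagine the things this persona is the least likely to like.",
    "doubt": "Suggest things this persona could be wrong about.",
    "situated": "Suggest unfamiliar situated perspectives and forms of otherness to explore.",
    "fungal": "Recommend acts rooted in fungal values of resilience, interconnectedness and symbiosis.",
}


def build_recommendation_prompt(flavor: str, profile: str) -> str:
    """Prefix the digital double's profile with the chosen recommender instruction."""
    return (
        "Here is a speculative profile of a person:\n"
        f"{profile}\n\n{RECOMMENDER_PROMPTS[flavor]}"
    )
```

Switching recommenders then only means swapping the instruction line, which is what lets the same profile yield such different kinds of suggestions.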
Large language models, such as the popular ChatGPT, can generate realistic and convincing text, and have seen wide adoption in a variety of applications. However, relying on such language models can have critical impacts on society, as they may perpetuate biases and stereotypes, and can be used to generate harmful content. Language models like ChatGPT can also have a normalizing effect, and sometimes misrepresent minorities or subcultures. During our experiment, we observed that ChatGPT had more difficulty generating compelling text when prompted with queries involving radical, queer, or non-normative views, while being more proficient at producing text reflecting neoliberal Western perspectives.