Who decides what the future will look, feel or sound like? When we think about AI, the conversation tends toward its biases and its asymmetric development and implementation. It is clear that the goals driving its mainstream development have not been human-centered, nor has a horizontal, diverse set of voices been embedded in its design. Science fiction has played a very interesting role in shaping how both specialized and non-specialized audiences approach AI.
Powerful narratives built around ideas such as the singularity and AI overpowering humans have been written and rewritten in novels, movies, podcasts, videogames and more. But how do these narratives shape our relationship with AI? Can we build new narratives that imagine a more horizontal, sustainable and caring AI? And how can we speculate about the future of AI by rethinking what privacy, agency and trust mean for these systems?