Behind the Interface: Opaque Reasoning
Behind the Interface examines the technocultural dimensions of working with (text) data, with the understanding that computing infrastructure and practices are not neutral but emerge from complicated historical lineages that often remain hidden from the user. By peering behind the interface at the circumstances, biases, and assumptions surrounding the layers of decision-making involved in developing technologies, we encourage you to consider how structures of inequality become hard-coded into the tools and conventions of data science, and how we can work towards opening up new sites of resistance and critique.
** COMING SOON ** <!-- While some of the computational text analysis techniques we have explored in the series are relatively easy to grasp (named entity recognition, for example, predicts how likely a word is to be a proper name), it is more challenging to describe how topic modeling works. Because the process is probabilistic, the clusters of words (topics) can be reshuffled between runs, returning different results.

In "Seeing without knowing," Ananny and Crawford draw attention to the limits of transparency as an ideal "for understanding and governing algorithmic systems." -->