{"id":550,"date":"2023-11-07T17:58:42","date_gmt":"2023-11-07T07:58:42","guid":{"rendered":"https:\/\/nedrossiter.org\/?p=550"},"modified":"2024-07-03T11:08:17","modified_gmt":"2024-07-03T01:08:17","slug":"generation-movement-epistemology-the-computational-condition-of-anti-aesthetics","status":"publish","type":"post","link":"https:\/\/nedrossiter.org\/?p=550","title":{"rendered":"Generation, Movement, Epistemology: The Computational Condition of Anti-Aesthetics"},"content":{"rendered":"<h2><strong>Zoe Horn and Ned Rossiter<\/strong><\/h2>\n<p>&nbsp;<\/p>\n<blockquote>\n<p>\u201cImages exist insofar as their media-habitats, ecosystems, and social practices exist and function to provide the structure of cognitive patterns for them.\u201d<\/p>\n<p>Lydia H. Liu,<em> The Freudian Robot: Digital Media and the Future of the Unconscious<\/em> (Chicago and London: University of Chicago Press, 2010), 219.<\/p>\n<\/blockquote>\n<p>The public debut of \u2018generative\u2019 AI tools in late 2022 spawned a flurry of excitement and also consternation among exuberant tech-bros, emoting politicians, conflicted creatives and the curious at large. Widespread anxieties about the impacts of this new wave of automation across society and economy failed to restrain an almost feverish interest. Tinkering and \u2018prompt-engineering\u2019 swiftly inducted recombinations of text and images into computational training routines. We critically probe the nexus between generative technologies such as ChatGPT (text-to-text) and Midjourney (text-to-image) and an emergent episteme figured around the movement of data.<\/p>\n<p><!--more-->The emergence of neural networks and deep learning techniques in the 21st century registers a step change, rather than departure point, in the genealogy of technological generativity. 
Needless to say, we find in the gravitation of attention to generative technologies a scale of extension into the realm of social and cultural production that warrants the claim that we are in the midst of an epistemic shift. Once any technological form is ubiquitous, conditions are established that instantiate new grammars of expression, new orderings of things, new technical ensembles that structure and organize the milieu of perception and cognition. Such a technological environment, we maintain, defines the conjunctural horizon of the future-present.<\/p>\n<p>We question why the terms \u2018generative\u2019 and \u2018generation\u2019 are appended to the algorithmic routines of the class of computational operations widely referred to as large language models (LLMs). The term \u2018prompt\u2019 might be another. First, though, \u2018generation\u2019: in the midst of the technical-epistemic ensemble of LLMs, it is hard not to be prompted to pursue a genealogy of generation. Is there a continuum that traffics with this term, carrying us from the industrial epoch of carbon capitalism to a Darwinian episteme of evolutionary species-being? Can we suppose, like Bateson, that there are \u2018patterns which connect\u2019?<a href=\"#_ftn1\" name=\"_ftnref1\"><sup>[1]<\/sup><\/a> And, if so, is this a computational condition or messianic order?<\/p>\n<p>The \u2018generative\u2019 turn in AI indexes a paradigm shift instituted by the massive scalar expansion in computational calculation, one that is contingent on the availability of vast quantities of training data but is now in the business of aggregating and enclosing the data commons. Generative AI is fuelling an extraordinary churn of novel \u2018synthetic\u2019 data streams, including but not limited to synthesized images, 3D models, texts, music, and short videos. Triggered by a short text prompt, DALL-E 2 generates realistic images in a software-specific aesthetic. 
Operating in a post-cybernetic idiom, the automated image combines distinct and unrelated objects in semantically calculable ways to assume the status of original.<a href=\"#_ftn2\" name=\"_ftnref2\"><sup>[2]<\/sup><\/a><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-538 size-large\" src=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-900x279.png\" alt=\"\" width=\"900\" height=\"279\" srcset=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-900x279.png 900w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-300x93.png 300w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-768x238.png 768w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-1536x476.png 1536w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/prompt-image-pairs-1-2048x635.png 2048w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p style=\"text-align: left;\">In a future draft of this paper, we will elaborate how this bears upon our argument that generative AI registers an emergent episteme. The political implications of this shift are not yet clear. What, for instance, is the status of the <em>labour theory of value<\/em> or, conversely, a <em>machine theory of labour<\/em> within such a paradigm? At a base level, there\u2019s an analytical imperative to better understand how circuits of data are constitutive of a larger social-technical episteme taking root within generative technologies.<\/p>\n<p>We propose, and will study further as we develop this text, that the history of generation as a term is coincident with machine procedures, from which we begin to discern an emergent episteme specific to generative technologies associated with LLMs. 
Within this technical episteme, we define movement and anti-aesthetics as a class of elements assigned to the training of AI models, coextensive with the political economy of elastic computing able to scale to client demands for transmission, storage and processing. We consider how this episteme manifests through our own experiments with text and image generation via ChatGPT and Midjourney. Again, in future iterations of this paper, we will explore this computational condition in terms of a paradox of time peculiar to what Peter Osborne calls the \u2018disjunctive conjunction\u2019 of the contemporary.<a href=\"#_ftn3\" name=\"_ftnref3\"><sup>[3]<\/sup><\/a><\/p>\n<h2>Analytical Currency of the Episteme<\/h2>\n<p>What is the significance of an episteme? Why might our critical attention be drawn to a constellation of technical rules, standards and procedures that define and organize grammars of expression? We note a tendency in recent years across media studies, STS and anthropology to make an assortment of statements and claims regarding current and emergent epistemic conditions, often aligned with a particular manifestation or practice of digital culture or society. But is the term epistemic or episteme doing anything more than servicing what might otherwise be understood as something more like \u2018knowledge production\u2019? In other words, what is the status and social-cultural implication of the epistemic as distinct from knowledge production? 
And what are we declaring, in this paper, in our claim of a new epistemic horizon attributed to the advent of generative technologies?<\/p>\n<p>In thinking the relation between technology and epistemology, German media theorist Friedrich Kittler\u2019s maxim remains provocative: \u2018Media determine our situation\u2019.<a href=\"#_ftn4\" name=\"_ftnref4\"><sup>[4]<\/sup><\/a> Media form the infrastructural condition, the quasi-transcendental architecture or media <em>a priori<\/em>, for experience and understanding.<\/p>\n<p>Within the analytical-political stakes pursued across the writings by Bernard Stiegler, the technology-epistemic couplet plays out in the first instance on the surface of cognition. For Stiegler, like Kittler, different technological forms organize cognitive processes in increasingly abstract ways across the iterative passage of historical time. Lydia Liu offers a perhaps unintentional synthesis of Kittler and Stiegler in her book <em>The Freudian Robot<\/em> where she writes: \u2018Images exist insofar as their media-habitats, ecosystems, and social practices exist and function to provide the structure of cognitive patterns for them\u2019.<a href=\"#_ftn5\" name=\"_ftnref5\"><sup>[5]<\/sup><\/a><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-524 size-large\" src=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-900x362.png\" alt=\"\" width=\"900\" height=\"362\" srcset=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-900x362.png 900w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-300x121.png 300w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-768x309.png 768w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-1536x617.png 1536w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/mathematical-gaze-2048x823.png 2048w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" 
\/><\/p>\n<p>&nbsp;<\/p>\n<p>Our interest in this paper is to partition our attention, focussing on LLMs and their technical logics that calculate the generation of text and images. The suite of technologies within the family of LLMs is largely defined by neural network architectures that \u2018learn\u2019 embedded functions between data points and perform operations that synthesize data. This operative logic has strengthened AI\u2019s functionalities around pattern detection and identification, but also the generation of new data from the data provided during training. It is precisely in this capacity to generate new data from within the continuously defined data field of machine operations of LLMs that we find a basis to claim a post-cybernetic event horizon. In his critique of the pervasiveness of mechanistic analogies of the mind, Matteo Pasquinelli writes, \u2018In reality, cybernetics was not a science but a school of engineering in drag\u2019.<a href=\"#_ftn6\" name=\"_ftnref6\"><sup>[6]<\/sup><\/a> By extension, we can understand the post-cybernetic idiom of large language models and machine images as next level, beyond engineering in drag and instead immersed in full-frontal transition of self-organization with a mathematical gaze fixed on the mirror of auto-generation.<a href=\"#_ftn7\" name=\"_ftnref7\"><sup>[7]<\/sup><\/a><\/p>\n<h2>What do Machines Learn?<\/h2>\n<p>Machine learning describes a kind of \u2018generational\u2019 confrontation within AI itself \u2013 a step in a line of technological and algorithmic descent characterized by still-emerging computational \u2018learning\u2019 systems related to perceiving, inferring, and synthesizing. Machine learning is performed at vastly different ranges of complexity and specialization, which can be difficult to differentiate within the general AI discourse. 
Machine learning and deep learning, for instance, are rarely distinguished from one another, though the latter is a subset of the former, intended to handle larger and more complex datasets using multiple (\u2018deep\u2019) layers of artificial neural networks. The computational architecture of deep learning is inspired by the structure and functioning of the human brain. In deep learning, procedural layers are organized by interconnected nodes, or networks of artificial neurons, that process and transmit data.<\/p>\n<p>The large language models (LLMs) that have engulfed the AI discourse and imagination of the past 6\u201312 months are very large deep learning models whose learning processes are first tested and developed with pre-training on vast amounts of data. In both machine learning and deep learning, the training process can be supervised, unsupervised, or based on reinforcement. In the first case, <em>supervised learning<\/em>, the algorithm is trained to make predictions or classify new data based on a labeled dataset. In <em>unsupervised learning<\/em>, the algorithm receives input data and gradually learns to identify correlations, similarities, and differences among the data until it derives recurring patterns and structures. In the case of <em>reinforcement learning<\/em>, the process is typically focused on a series of decisions, with the algorithm rewarded when the desired decision is taken.<\/p>\n<p>Within the process of LLM image and text generation, latent space, also known as embedding space, refers to a lower-dimensional, abstract representation of data that captures the underlying structure and variations in the original high-dimensional data space. This dimensionality reduction simplifies the modeling process and makes it more tractable, especially when dealing with complex and high-dimensional data. 
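<\/p>\n<p>As an illustrative sketch only, and not the mechanism of any particular model, the idea of dimensionality reduction into a latent space can be shown with principal component analysis, assumed here as the simplest linear stand-in for the learned, nonlinear embeddings of LLMs and diffusion models.<\/p>

```python
# Illustrative sketch: dimensionality reduction via PCA, a linear toy
# standing in for the learned, nonlinear latent (embedding) spaces of
# generative models. It only shows the idea: map high-dimensional data
# to a compact representation that preserves the underlying structure.
import numpy as np

rng = np.random.default_rng(0)

# 200 samples in a 50-dimensional data space, secretly generated
# from 2 underlying factors plus a little noise.
factors = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 50))
data = factors @ mixing + 0.05 * rng.normal(size=(200, 50))

# PCA: project onto the top-2 principal directions, a 2-D latent space.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:2].T        # encode: data space -> latent space
reconstructed = latent @ vt[:2]     # decode: latent space -> data space

print(latent.shape)                 # (200, 2)
error = np.linalg.norm(centered - reconstructed) / np.linalg.norm(centered)
print(error)                        # small: 2 dimensions carry most structure
```

<p>The point of the sketch is the shape of the operation: high-dimensional data in, a compact representation out, with most of the structure preserved.<\/p>\n<p>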
Latent space can be understood as a compressed, more organized spatial representation of the world (to a computer), where different data points with similar characteristics are located more closely to one another. A generative model learns to map data points from the latent space back to the original data space, generating new data instances that resemble those in the training dataset. The process of mapping data points from the latent space to the original data space is usually called generation, though it is also referred to as decoding.<\/p>\n<p>Text-to-image generation uses natural language processing (NLP), large language models (LLMs), and diffusion processing to produce digital images.<a href=\"#_ftn8\" name=\"_ftnref8\"><sup>[8]<\/sup><\/a> At the time of writing, more recent generative AIs rely on a \u2018stable\u2019 diffusion process, whereby a first phase is carried out by a transformer model, and consists of image encoding through a neural network pre-trained on a large-scale dataset containing billions of image-text pairs.<a href=\"#_ftn9\" name=\"_ftnref9\"><sup>[9]<\/sup><\/a> The training produces the embeddings within the latent space that are combined to form a joint representation of image and text, capturing the semantic \u2018meaning\u2019 of both. This process is central to image and text classification tasks and object detection, for example. In stable diffusion models, further passages convert a text prompt into text and image embeddings, which the diffusion neural network transforms into images.<\/p>\n<h2>Models of Memory, Engineering Experiments<\/h2>\n<p>Learning and memory are closely related concepts. 
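<\/p>\n<p>Before turning to memory, the mapping from latent space back to data space described above, that is, generation as decoding, can be sketched in a minimal form. The linear decoder below is an assumption for illustration; actual generative models learn deep, nonlinear decoders.<\/p>

```python
# Minimal sketch of 'generation as decoding': draw new points in a
# latent space and map them through a decoder fitted on training data.
# A linear (PCA-style) decoder stands in for the deep decoders of real
# generative models; the principle, not the mechanism, is the same.
import numpy as np

rng = np.random.default_rng(1)

# Training data in a 20-dimensional data space, driven by 3 latent factors.
z_train = rng.normal(size=(500, 3))
decoder_true = rng.normal(size=(3, 20))
x_train = z_train @ decoder_true

# 'Learn' a decoder: the top-3 principal directions of the training data.
mean = x_train.mean(axis=0)
_, _, vt = np.linalg.svd(x_train - mean, full_matrices=False)
components = vt[:3]

# Generate: sample fresh latent points and decode them to data space.
z_new = rng.normal(size=(5, 3))
x_generated = z_new @ components + mean

print(x_generated.shape)  # (5, 20), five new instances not in the training set
```

<p>New latent samples decode to new instances that lie within the structure of the training data, which is the sense in which outputs \u2018look like\u2019 the training set.<\/p>\n<p>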
We will assume in this meeting, given the constraints of time, that there\u2019s no need here to revisit the long disciplinary histories and preoccupations, from cognitive to computer sciences, that trouble the possibility of speaking of human and machine memory in the same breath. If learning, at its core, is the acquisition of knowledge, memory might be understood as the <em>expression<\/em> of what\u2019s been acquired. LLMs, in this case, and more precisely their generation, present an interesting model \u2013 and modeling \u2013 of memory to the extent that they can be understood to express a form of knowledge production and retention. Such a model is not fixed in time at the instance of pre-training, but rather a mode of learning that we might extend to include the ongoing forms of re-training, fine-tuning, stuffing, conversational memory and other techniques and experiments of refinement that accumulate, process and analyze data. A core objective here is to adapt foundation models and hone more task-specific, descendent generative AI models and tools. In this regard, the model proliferates a family or class of variants within the parametric contours of the prompt.<\/p>\n<p>How can we make sense of DALL-E or Stable Diffusion image outputs as memory, or expression of a machine\u2019s learning predicated on retention? For one thing, even though we know that these generator machines can differ in how they capture and reproduce data distributions, how they organize latent space, and in the procedures used to learn the mapping, we also know they operate using the same principle: make outputs that look like the training data. 
Insofar as their expressions can be recognised as distinct, novel, unique or original to any individual image of the training set, we can recognise such outputs as the work of an incalculable (to the human) schema of vectors internal to the operative logic of the machine.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-526 size-large\" src=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-900x193.png\" alt=\"\" width=\"900\" height=\"193\" srcset=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-900x193.png 900w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-300x64.png 300w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-768x164.png 768w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-1536x329.png 1536w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/diffusion-2048x438.png 2048w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>So while we perhaps cannot decode the machine\u2019s knowledge from any given image output, in the way that a memory does not easily reveal <em>how<\/em> it came to be acquired, we can <em>know<\/em>, for lack of a better word, the experience of the machine. We <em>generate <\/em>its learning environment. And even though, in the strictest sense of its computational operations, a model can stop \u2018learning\u2019 once it is trained to the satisfaction of its human handlers, in practice we know these machines have been unleashed into the wild precisely because they are understood to be partial. Models like DALL-E 2 or GPT 4 are trained on vast, broad data using self-supervision at scale. 
They are understood as \u2018foundation models\u2019, a concept or at least terminology coined by Rishi Bommasani and his massive team of co-authors to underscore their \u2018central yet incomplete character\u2019.
New grammars of expression are organized computationally in the form of nascent typologies.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-527 size-large\" src=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-900x515.png\" alt=\"\" width=\"900\" height=\"515\" srcset=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-900x515.png 900w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-300x172.png 300w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-768x440.png 768w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-1536x880.png 1536w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Midjourney-variation-2048x1173.png 2048w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>The term \u2018anti-aesthetics\u2019 recalls, as many of you will remember, one of the anchor points that populated the delightful delirium of the eighties and early nineties when postmodernism trafficked across the horizon of cultural production. Here, we are thinking of Hal Foster\u2019s edited collection, <em>The Anti-Aesthetic: Essays on Postmodern Culture<\/em>, published by Bay Press in 1983.<a href=\"#_ftn11\" name=\"_ftnref11\"><sup>[11]<\/sup><\/a> As it turned out, the title had more to say about the anti-aesthetic than the book\u2019s contents. But dancing with the surface of meaning never troubled postmodern purists. So what bearing might the anti-aesthetic have on the pattern recognition attributed to deep learning?<\/p>\n<p>In his preface, Hal Foster does actually make a few framing remarks on the anti-aesthetic. 
He writes:<\/p>\n<blockquote>\n<p>\u2026 anti-aesthetic is the sign not of modern nihilism \u2013\u00a0which so often transgressed the law only to confirm it \u2013 but rather a critique which destructures the order of representations in order to reinscribe them.<\/p>\n<p>\u2018Anti-aesthetic\u2019 also signals that the very notion of the aesthetic, its network of ideas, is in question here: the idea that aesthetic experience exists apart, without \u2018purpose\u2019, all but beyond history, or that art can now effect a world at once (inter)subjective, concrete and universal \u2013 a symbolic totality. Like \u2018postmodernism\u2019, then, \u2018anti-aesthetic\u2019 marks a cultural position on the present: are categories afforded by the aesthetic still valid? (xv)<\/p>\n<\/blockquote>\n<p>A few of the key categories we might associate with the aesthetic would include terms such as beauty, sublime, autonomy, transcendent, transgressive, even \u2018resistance\u2019 \u2013 a key \u2018strategy of interference\u2019 Foster invokes in setting out the case for postmodern culture. None of these terms would seem to apply when transposed to machine generated images, though arguably there is a certain beauty to Midjourney filters that cast a pharmacological glaze over the screen of the world. Other, more interesting, aesthetic categories such as sense and sensation are more pregnant with possibility when thinking the logic of machine images. 
But only insofar as this is a negative operation of depletion, of numb saturation that crushes synaptic circuits in the brain as neural networks calculate correspondence within iterative procedures of accumulation, spat out as singular renditions of a computational aesthetic designed to add incremental variation to baseline data sets.<\/p>\n<h2>Translating Kinaesthesis<\/h2>\n<p>If kinaesthetics is a study of body motion and self-perception in relation to one\u2019s own body, maybe the computational processes and outputs of large language models most resemble a kind of kinaesthesis, or muscle-memory, rather than a form of cognition? Data are synthetically fashioned from computational procedures of recursivity and recombination to generate iterative outputs. This is what we understand as a post-cybernetic operation, where externalities of noise and feedback fade away within a system that is internally generative. There is no cognitive determination going on here, as Katherine Hayles, Bernard Stiegler and others have examined at length.<a href=\"#_ftn12\" name=\"_ftnref12\"><sup>[12]<\/sup><\/a><\/p>\n<p>Machine deep learning is not analogous to cognitive processes of perception and deduction. 
To presume so is a problem of \u2018epistemic translation\u2019, as Matteo Pasquinelli points out in his recent book, <em>The Eye of the Master: A Social History of Artificial Intelligence<\/em>.<a href=\"#_ftn13\" name=\"_ftnref13\"><sup>[13]<\/sup><\/a> We might settle with the idea of a new class of nonconscious cognition (Hayles), but such a hypothesis doesn\u2019t sufficiently probe how the proliferation of machine images contours the landscape of human perception and services the world\u2019s repository of images that comprise the endless repertoire of aesthetic expression \u2013 even if the balance of scales tips toward the blunt coldness of the computational anti-aesthetic.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-528 size-large\" src=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Zoe-Ned-2-900x490.png\" alt=\"\" width=\"900\" height=\"490\" srcset=\"https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Zoe-Ned-2-900x490.png 900w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Zoe-Ned-2-300x163.png 300w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Zoe-Ned-2-768x418.png 768w, https:\/\/nedrossiter.org\/wp-content\/uploads\/2023\/11\/Zoe-Ned-2.png 1414w\" sizes=\"auto, (max-width: 900px) 100vw, 900px\" \/><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<p>Paper presented at <em>Exo-Mnemonics: Memory, Media, Machines<\/em>, Symposium, Western Sydney University, 6\u20137 November 2023.<\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\"><sup>[1]<\/sup><\/a> Gregory Bateson, <em>Mind and Nature: A Necessary Unity<\/em> (Cresskill, New Jersey: Hampton Press, 1979), 7\u20138.<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\"><sup>[2]<\/sup><\/a> The automated image can also perform image manipulation and interpolation with existing images.<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\"><sup>[3]<\/sup><\/a> Peter Osborne, 
\u2018Working the Contemporary: History as a Project of Crisis Today\u2019. In <em>Crisis as Form<\/em> (London and New York: Verso, 2022), 3\u201317.<\/p>\n<p><a href=\"#_ftnref4\" name=\"_ftn4\"><sup>[4]<\/sup><\/a> Friedrich A. Kittler, <em>Gramophone, Film, Typewriter<\/em>, trans. Geoffrey Winthrop-Young and Michael Wutz (Stanford: Stanford University Press, 1999), xxxix.<\/p>\n<p><a href=\"#_ftnref5\" name=\"_ftn5\"><sup>[5]<\/sup><\/a> Lydia H. Liu, <em>The Freudian Robot: Digital Media and the Future of the Unconscious<\/em> (Chicago and London: University of Chicago Press, 2010), 219.<\/p>\n<p><a href=\"#_ftnref6\" name=\"_ftn6\"><sup>[6]<\/sup><\/a> Matteo Pasquinelli, <em>The Eye of the Master: A Social History of Artificial Intelligence<\/em> (London: Verso, 2023), 152.<\/p>\n<p><a href=\"#_ftnref7\" name=\"_ftn7\"><sup>[7]<\/sup><\/a> See also Jean Baudrillard, <em>The Mirror of Production<\/em>, trans. Mark Poster (St. Louis: Telos Press, 1975).<\/p>\n<p><a href=\"#_ftnref8\" name=\"_ftn8\"><sup>[8]<\/sup><\/a> Generative adversarial networks (GANs) are, generally speaking, a predecessor of diffusion models.<\/p>\n<p><a href=\"#_ftnref9\" name=\"_ftn9\"><sup>[9]<\/sup><\/a> The first release of DALL-E, for example, was optimized for image production using a trimmed-down version of the GPT-3 LLM, employing 12 billion parameters (instead of GPT-3\u2019s full 175 billion) trained on a dataset of 250 million image-text pairs.<\/p>\n<p><a href=\"#_ftnref10\" name=\"_ftn10\"><sup>[10]<\/sup><\/a> Rishi Bommasani et al., \u2018On the Opportunities and Risks of Foundation Models\u2019, Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University, 2022, <a href=\"https:\/\/arxiv.org\/pdf\/2108.07258.pdf?utm_source=morning_brew\">https:\/\/arxiv.org\/pdf\/2108.07258.pdf?utm_source=morning_brew<\/a><\/p>\n<p><a href=\"#_ftnref11\" name=\"_ftn11\"><sup>[11]<\/sup><\/a> Hal 
Foster, <em>The Anti-Aesthetic: Essays on Postmodern Culture<\/em> (Seattle: Bay Press, 1983).<\/p>\n<p><a href=\"#_ftnref12\" name=\"_ftn12\"><sup>[12]<\/sup><\/a> N. Katherine Hayles, <em>Unthought: The Power of the Cognitive Nonconscious<\/em> (Chicago and London: University of Chicago Press, 2017). Stiegler considers this issue as a social crisis on a mass, trans-generational scale across many of his works. See, for instance, Bernard Stiegler, <em>Taking Care of Youth and the Generations<\/em>, trans. Stephen Barker (Stanford: Stanford University Press, 2010) and Bernard Stiegler, <em>The Age of Disruption: Technology and Madness in Computational Capitalism<\/em>, trans. Daniel Ross (London: Polity, 2021).<\/p>\n<p><a href=\"#_ftnref13\" name=\"_ftn13\"><sup>[13]<\/sup><\/a> Pasquinelli, 153.<\/p>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Zoe Horn and Ned Rossiter &nbsp; \u201cImages exist insofar as their media-habitats, ecosystems, and social practices exist and function to provide the structure of cognitive patterns for them.\u201d Lydia H. Liu, The Freudian Robot: Digital Media and the Future of the Unconscious (Chicago and London: University of Chicago Press, 2010), 219. 
The public debut of [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-550","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/posts\/550","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=550"}],"version-history":[{"count":13,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/posts\/550\/revisions"}],"predecessor-version":[{"id":564,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=\/wp\/v2\/posts\/550\/revisions\/564"}],"wp:attachment":[{"href":"https:\/\/nedrossiter.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=550"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=550"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nedrossiter.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=550"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}