Art Art History Artificial Intelligence Crypto Philosophy

AI Art, Ownership, Blockchain

Questions of “ownership” in art can be a matter of law, of social norms, or of art theory. New art forms and new methods of producing art can fall foul of existing answers to these questions or creatively re-open them. Often they do both. “AI Art” produced using contemporary “Artificial Intelligence” artificial neural network software is a good example of this. “Rare Art” produced using blockchain token software is another, which we will consider below in relation to one particularly notorious example of AI Art.

“Portrait of Edmond Belamy” was produced by the artist group “Obvious” using an existing artificial neural network model trained on a corpus of images of classical paintings. Obvious did not credit the author of that network model, or any of the artists whose paintings were included in the corpus. At this point there are already three different layers of questions around ownership.

Firstly, the assembly of an image corpus. Accurate reproductions of paintings that are no longer in copyright should not attract copyright, and in the US at least this is quite rightly the case. A collection of such reproductions may attract copyright on the collection itself, but this should not affect individual works within the collection. If the images were of paintings that are still under copyright, copying each image might infringe that copyright. I say “might” because doing so might fall under fair use/fair dealing (hereafter just “fair use”) exceptions to copyright. These exceptions are popular both with artists who work with appropriation and with large Internet companies who work with search and advertising. Both groups, and others such as Digital Humanities scholars, might wish to assemble such corpora of images so it is difficult to generalise about motives and outcomes regarding them. In the case of art, however, fair use for artists is a key defence of artistic creativity in an age where the visual environment is dominated by corporate media.

Beyond this legal view is the ethical and art theoretic view. Is it right to treat individual artworks in a corpus as tokens of a type or as just part of a set, as fodder or as raw material for an industrial process? With apologies to Clement Greenberg, does discarding the tactile elements of painting still meaningfully capture it, and does discarding visual detail and differences in scale discard more for processes of derivation than processes of study?

Secondly, the training and use of the artificial neural network model on that image corpus. The model is trained by processing images in the corpus, by copying and reading their data. The model will contain representations of parts of the images from its training corpus, and its output will also resemble parts of the images it was trained on. Mechanical copying and creating derivatives of images are covered by copyright. Cutting up images and juxtaposing them with the work of other artists is covered by the moral rights that accompany copyright. Again, copyright does not apply to works that are out of copyright (moral rights vary by country…), and artists should have a claim to fair use of such materials. The degree to which artistic use of source materials transforms them should be a factor in establishing that such use is indeed fair use, and the output of artificial neural networks certainly transforms the images that they are trained on. Style is not copyrightable (let’s not talk about “trade dress” here), but forgery and “passing off” can be legal matters, and the application of an artist’s signature style to a work that they have not made but that is sold under their name is the same whether performed by human hand or algorithm.

Again, the ethical and art theoretic view raises more questions. Signature styles are a matter of pride as well as profit for artists, and while this can be critiqued within art theory it is a strongly established norm that simple imitation of style, without a critical framework for doing so, is a breach of artistic norms. Artificial neural network models need not operationalise an individual artist’s signature style in order to devalue the concept of signature styles in general.

Thirdly, Obvious’s use of existing neural network software to generate an image has caused widespread debate. “Signing” the image with the algorithm used by the artificial neural network software to produce it functions as a double-bluff whatever the intention behind doing so. We know that Obvious produced and sold the image; their attribution is not threatened by this. But it erases the work both of Robbie Barrat, who produced the model of art of which the image is simply a product, and of the artists whose authorship the neural network’s model already erases, in terms both of attribution and of control of their work (even if from beyond the grave in the case of the corpus artists). Software authors should not be able to control uses of the tools that they produce – Microsoft should not be able to censor your writing using Word or claim joint authorship of everything you write using it. But a trained artificial neural network model is a more complex thing than a text editor from this point of view – it is as much content as it is tool. Microsoft should not be able to tell you how to change the contents of an empty text document, but changing a novel or a painting, whether represented physically or digitally, may infringe the copyright and moral rights that it may have. Again, fair use should be strongly considered for artistically transformative uses of artificial neural network models.

Art theoretically, such direct and uncredited use of existing materials, even materials created by another artist, may count as appropriation art, which is an established category within the arts. Appropriation art is deliberately transgressive, often for critical effect. Appropriating non-art or low-art materials is very different from appropriating canonical art or the art of leading contemporaries, but both can be critical moves. Artistic labour can also be appropriated directly, as in the case of contemporary artists such as Jeff Koons or Damien Hirst who use studio assistants to produce art under their own signature. The signature that the products of this labour are exhibited and sold under is a key part of its erasure. And where software used to make art is free software (open source), attribution may or may not be a strong social norm, but past a certain point that attribution is useful information to have for artistic, critical, and art historical engagement with the work.

Prior to this, Obvious had already encountered the question of ownership and found an answer that led directly to “Portrait of Edmond Belamy” being sold at auction. That answer was based on AI’s twin in contemporary technological hype, the blockchain.

Christie’s discovered Obvious via their work on Superrare, a blockchain-based “Rare Art” platform. Rare Art is named after the “Rare Pepe” project that developed the techniques of using cryptocurrency and blockchain token technology to record limited edition certificates for digital images. This produces “artificial scarcity” and allows a form of ownership for pieces of digital art that would otherwise be infinitely reproducible. This use of certificates as ownership proxies for art was pioneered by conceptual art. Compared to a flammable piece of paper with a handwritten signature on it, the authenticity of a blockchain transaction secured each day by a not inconsiderable fraction of the world’s computing power only increases over time. It may not be entirely clear what the authenticity is of, but the terms and conditions of Rare Art platforms and the community norms of their users and consumers do produce a vivid image of a novel and very strong concept of ownership.

“True digital ownership” on a blockchain secured by cryptographic keys is seen by its proponents as stronger, more trustworthy, and more absolute than previous conceptions of property. This makes AI art a natural fit for Rare Art because each has needs that the other fulfills: ownership in the case of the products of AI art, strongly perceptible uniqueness but also recognisability as art in the case of Rare Art. Sale at auction also provides this kind of closure for the financial value of art, but new art and in particular digital art faces a bootstrapping problem in which it must establish its value in order to be sold at auction but cannot be sold at auction without first establishing its value. Christie’s saw art by Obvious selling on Superrare and could react to that market signal more quickly and with lower risk than with signals from gallery or online sales of physical goods.

It is a truism of International Art English that art questions things. There are many questions in play in both AI Art and Rare Art. They involve the concept of ownership considered in terms of the law, of social norms, and of art theory. The answers to these questions from within each of these realms individually may be obvious and simple to their practitioners, but between them they may be more at odds than each realises. Negotiating this without closing the door to cultural creativity or opening it to corporate exploitation is a task that is of interest far beyond the artworld.

(I am not a lawyer, etc.)


It All Sounds The Same

In the early 1990s, on a show called “A Stab In The Dark” that was a disastrous attempt to revive the TW3 format, the comedian David Baddiel demanded that audience members name random acid house tracks played over the studio PA. One embarrassed young man eventually helped Baddiel out by admitting that he couldn’t.

“It all sounds the same to me” is a dismissive and often reactionary comment. But it is also a judgement and an account of identity or rather of its lack. It renders something indifferent.

In a review of an album by the electronic music outfit Autechre on The Quietus website, reviewer Charlie Frame was faced with the opposite problem when they wrote:

It would now take a machine with a capacity and patience far exceeding that of any mortal being to keep track of their increasingly arcane song-titles alone, which are deliberately alienating in their anonymity, as though they’d been randomly selected from sections of a printer test page. I’d wager Autechre themselves have trouble differentiating between their ‘Chenc9-1Dub’s and their ‘Nth Dafusederb’s…

Whether Autechre or acid house, and whatever you call it, electronic music is clearly different from, say, Shostakovich. And the first and second Autechre tracks, and the two acid house tracks, will have differences when played one after the other. You may not be able to identify precisely which track is which if you are dropped into them midway through; you may not be able to find them afterwards; you certainly won’t be able to name them. But when faced with them you would be able to tell what is different about them, even if only that they do not occupy the same moment.

You can also tell what is different between two tracks by Autechre and two classic acid house ones. You don’t even need to know that they are Autechre or acid house. Each track is different from the other in the pair, and the differences between each track in each pair are different from the differences of the other pair. If not in their immediate sound then in their production or some other property. It’s the same with the music events that these tracks were and are played at. Each event in a series of events is different from the others, and each series of events also has different differences from the other series.

This is Deleuze’s “Difference and Repetition”. Differences, differences between series, differences of differences, and repetitions that make the differences. I have named the things that are different here, but if we remove the names the point stands. It is not removing the names that removes the identities. Rather it is recognising that the identities are neither necessary nor sufficient to identify what is named here.

If you don’t believe me, just listen to Autechre on shuffle. 😉

Crypto Philosophy

Why Bitcoin is Money According To Marx

tl;dr: whales.

In “Marx on Money”, Suzanne de Brunhoff describes the theory of money that Karl Marx presents in “Capital”.

Money, for Marx, emerges in three stages prior to capitalism.

In the first stage, gold becomes a measure of the value of all other commodities rather than simply one commodity among many.

In the second stage, gold coins become the medium of circulation. Once gold becomes de-materialized in this way its role can then be occupied by fiat currency.

In the third and final stage, the emergence of hoarding paradoxically introduces money “proper”.

de Brunhoff writes that “Hoarding is a demand for money as money…”, an interruption in the circulation of money that “…serves to ceaselessly preserve and reconstitute the money form as such, whatever the deformations, transformations, and disappearances it undergoes as a result of the other two functions. Produced by these, it becomes in its turn a condition of their functioning.”

In the crypto space, hoarders are known as whales. They remove their coins from circulation with the expectation that this will ultimately provide them with more utility than immediately spending them. In this they act just like hoarders of gold coin or fiat currency. To quote Marx, “The money becomes petrified into a hoard, and the seller becomes a hoarder of money.”

Whales therefore establish that cryptocurrency is money according to Karl Marx.

Crypto Hyperstition Philosophy

Hash Gematria

Gematria in Hebrew uses a SIGINT attack on God’s fully homomorphic encryption of the book of nature to extract meaning. A non-Hebrew gematria is a glimpse not back into the mind of God but forward through the fall of the tower of Babel into a scrambled linguistic world of contingency. It is a generator of Deleuzean “dark precursors” to connections between concepts, just as rhymes are. These connections are useful irritants, spurs to the generation of actual structure that would otherwise not occur, anchors for beliefs. Both kinds of gematria are exercises in exploiting the surplus value of code. The former is revelation, the latter is construction. Yet each resembles the other as much as is possible in their respective universes.

Interpreting letters as numbers recapitulates the history of mathematical notation. This is a defensible choice based on its maximal simplicity and its historical embedding. Cryptographic hashing lacks both this simplicity (for a human being to calculate the value of a word numerically takes seconds, for them to calculate a cryptographic hash by hand would take around 15 minutes) and this history (cryptographic hashes date back only to the 1970s).

It does however compress a history of ever increasing uniqueness and thereby security (in the senses of both secrecy and stability) of identity. It is part of the present moment of the history of technocapital/techonomics rather than a form of nostalgia for the past of accounting and its gentrified forms (or “mathematics”). As the number of cryptographic hashes calculated by the Bitcoin network alone approaches 120,000,000,000,000,000,000 per second at the start of 2020, the process of hashing reinforces its reality and its effects in the world through sheer volume of repetition.

Cryptographic hash collisions (where hashing two different pieces of data generate the same hash value identifier or “name”) are vanishingly unlikely by design. The evolution of cryptographic hashing algorithms has been the evolution of ever more effective ways of scattering bits into cryptographic space to destroy their significance while retaining their identity. But the 64-character hexadecimal (base-16) strings used to encode 256-bit hash values are difficult for human beings to read and compare, so software systems that use them pervasively such as “git” or “Docker” truncate them by displaying or reading just the first few characters (the “prefix”) in order to make them more readable.

These shorter values do collide as more and more hashes are used to refer to more and more things in the world (this is the “Birthday Problem”) and so longer prefixes have been used over time. It takes two hexadecimal characters to encode one eight-bit byte of data. The first byte of the hash value is 2 hexadecimal characters, the first two are four characters, the first four are eight characters etc. We can represent hash prefixes in more exotic bases: Proquint, BIP-49, Urbit @p, Base 56, Bech32 or even decimal. But these are not the hash values that are displayed pervasively within the culture of computing.
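The birthday effect on truncated prefixes is easy to demonstrate. The following sketch (in Python, using the standard hashlib library; the `word-N` inputs are arbitrary made-up strings) birthday-searches a stream of inputs until two of them share a four-character SHA-256 prefix. With 16 bits of prefix, a collision turns up after roughly 2⁸ = 256 attempts on average:

```python
import hashlib
from itertools import count

def prefix(text, chars=4):
    """First `chars` hex characters of the SHA-256 hash of `text`."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:chars]

def find_collision(chars=4):
    """Birthday-search distinct inputs whose hash prefixes collide."""
    seen = {}
    for i in count():
        word = f"word-{i}"
        p = prefix(word, chars)
        if p in seen:
            return seen[p], word, p
        seen[p] = word

# A 4-character (16-bit) prefix space holds 65536 values, so a collision
# appears after ~256 inputs rather than ~65536 (the Birthday Problem).
a, b, p = find_collision(4)
print(a, b, p)
```

Doubling the prefix length squares the expected work, which is why git and Docker have lengthened their displayed prefixes over time.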

Non-cryptographic hashes will collide far more frequently but they are not embedded in the same way within that culture or in the culture of technocapital and its imaginaries of resistance (crypto-anarchy, cryptocurrency) as cryptographic hashing algorithms are. We therefore mean cryptographic hashes when we refer to hashes here.

Replacing letter-value summing and decimal reduction with cryptographic hash prefix collision generation gives us hash gematria. This is a hype-cycle peak-shift maximally historically contingent and embedded extractor of surplus value of code. This surplus value will itself be maximally historically contingent and embedded dark precursors. As a strategy this is inflected by the pop cultural strains of Chaos Magick but has far greater qualities of repetition and embeddedness and is more abstract and therefore more dynamic than a specific cultural expression.

If this seems unconvincing, what would a stronger candidate be?

What for?

To get the first four characters of a cryptographic hash of a piece of text using the Unix command line, enter something like the following:

echo -n "egress" | sha256sum | cut -c -4
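For comparison, here are both kinds of value sketched in Python: a conventional letter-summing gematria (assuming the simple a=1 … z=26 English cipher with decimal reduction, one common convention among several) alongside the hash prefix computed by the shell command above:

```python
import hashlib

def letter_sum(word):
    """Simple English gematria: a=1 ... z=26, summed."""
    return sum(ord(c) - ord('a') + 1 for c in word.lower() if c.isalpha())

def reduce_decimal(n):
    """Repeatedly sum decimal digits down to a single digit."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def hash_prefix(word, chars=4):
    """The hash-gematria 'value': a SHA-256 prefix, as in the shell example."""
    return hashlib.sha256(word.encode("utf-8")).hexdigest()[:chars]

word = "egress"
print(letter_sum(word))                  # 73
print(reduce_decimal(letter_sum(word)))  # 1
print(hash_prefix(word))
```

The letter sum takes a schoolchild seconds; the hash embeds the word in a 256-bit space where only prefix collision, not arithmetic coincidence, can connect it to another.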

Accelerationism Aesthetics Art Philosophy Projects Satire

Upload Update

Like the narrator of William Gibson’s short story “The Winter Market”, I don’t think that mind uploads are the person whose brain they destroy. I’m not even sure that a living brain is the same person each day, or from moment to moment, but reassembling a similar pattern on the same substrate at least looks like continuity. Whether the Ship Of Theseus is the ship that Theseus sailed or not, a copy built next to it all in one go probably isn’t. But if the Ship Of Theseus burns, that copy is more like it than anything else that exists. Where the resemblance is many billions of bits strong, and there is no stronger resemblance extant, that’s a form of continuity of identity. Hopefully that of a portrait that captures the sitter’s personality rather than a vampire child.

The only fully uploaded neural connectome is that of the tiny C. elegans nematode worm. Not any particular worm: the worm as an organism. So there is no single identity for the upload to continue or not to continue. The connectome has been downloaded into wheeled robots, where it bumbles around in a wormy manner. I’m working on using it to control the pen in a version of draw-something. It’s a different kind of neural art. Nematodes probably don’t have subjectivity, so hopefully this isn’t cruel. I don’t want to be the worm-torturing version of Roko’s Basilisk.

What if we are the worms in someone else’s art project, though? If the universe isn’t a simulation but rather an artwork this would render conceptual art nomination a priori correct and give human suffering the moral quality of crimes committed in the name of making art that do not pay for themselves with the resultant aesthetic achievement.

Neal Stephenson’s mind uploading novel “Fall, Or Dodge In Hell” deals in the ethics and aesthetics of mind uploading and its worlds. Less simulation, more simulacra. Reading it and encountering an uptick in transhumanist themes online and in meatspace has encouraged me to revisit my low-resolution “Uploads” project to make it very slightly higher resolution. I’m porting it to Kinect 2, improving its performance, and looking at better EEG options.

Following the themes of “Fall”, the uploads need a world to live in. At present they implicitly live through, but not on, Twitter. Maybe they can inhabit a simple VR environment. They also need to communicate with each other. Sad and other predetermined emotional reacts only, though. As local disk-based blobs of data they are in danger of being ephemeral. Content-addressable storage (IPFS) can help with that.

Blockchain security and permanence can evocatively address all of this as well – there are blockchain VR environments, communication systems, and data storage systems. There’s a fear of loss behind both mind uploading and blockchain systems. Finn Brunton’s excellent book “Digital Cash” draws out some more direct historical connections between the two.

But that’s another story.

Crypto Philosophy Uncategorized


A “bit” is a basic unit of information entropy. It’s binary, either on or off, present or absent, one or zero.

A “string” in computer programming is a sequence of items. Strings may be fixed or variable in length: eight, sixteen, thirty-two and sixty-four bit numbers are fixed length, while a text string is variable length.

A byte is a series of eight bits that’s used as a standard representation for typographic characters, colour values and many other things. Up until IBM’s OS/360 project in the late 1960s there was no real standard for this – computers might be decimal, or alphabetic, or have “words” of sizes from four to twenty-four bits. Some Soviet computers of the same period used ternary logic rather than binary. Alan Turing used a logarithmic measure of information entropy called a “ban“. So be wary of naturalising the bit and the eight-bit byte, but when you see bits grouped together in strings of lengths that divide neatly into eight, recognise that this is related to the reality of how most modern computer systems divide up their memory.

Bitstrings can be used to represent the presence or absence of properties. A fixed-length bitstring is a bitfield, but we’re going to stick with the more general name. Integer numbers represented in binary use bits to represent the presence or absence of quantities of increasing sizes within the number. 0110 is six in a four bit “nibble”. UNIX filesystems represent the permissions that the owner and other users of a file have to access and manipulate it as a sequence of bits.
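The permissions example can be made concrete. This Python sketch (an illustration, not the actual filesystem code) decodes a nine-bit UNIX permission field into the familiar rwxrwxrwx form, with each bit recording the presence or absence of one capability:

```python
# Unix permission bits as a bitfield: owner/group/other x read/write/execute.
# 0b0110 == 6: six in a four-bit nibble, as in the text above.
R, W, X = 0b100, 0b010, 0b001

def describe(mode):
    """Render a 9-bit permission field as the familiar rwxrwxrwx string."""
    out = []
    for shift in (6, 3, 0):  # owner, group, other
        triple = (mode >> shift) & 0b111
        out.append(('r' if triple & R else '-') +
                   ('w' if triple & W else '-') +
                   ('x' if triple & X else '-'))
    return ''.join(out)

print(describe(0o644))  # rw-r--r--
print(describe(0o755))  # rwxr-xr-x
```

The octal notation used by `chmod` is itself just a convenient way of writing the bitfield three bits at a time.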

Such bitfields can be found throughout computing. The satirical proposal for an “evil bit” to be set on Internet messages that have evil intent shows both the prevalence of bitstrings and their users’ awareness of the limitations of binary thinking and computational representation.

As with their use to represent integer numbers using binary, bits can represent doubling or halving of quantities. It takes 33 bits of entropy to uniquely identify an individual among seven billion on Earth. Cryptographic hashes, which produce compact unique “names” for any input file of any length, often output 128, 160 or 256 bit values. Each bit doubles the possible size, quantity, or uniqueness of the thing it represents. It also doubles the size of the space in which it can hide.
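The 33-bit figure is just a base-two logarithm, which is quick to check:

```python
import math

world_population = 7_000_000_000
# Bits needed to give everyone a distinct identifier: ceil(log2(N)).
bits = math.ceil(math.log2(world_population))
print(bits)  # 33

# Each extra bit doubles the space a value can occupy (or hide in);
# a 256-bit hash value lives in a space of 2**256 possibilities.
print(2 ** 256)
```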

Contemporary cryptographic encoding and signing systems use keys several thousand bits in length. They would take a conventional computer an infeasible amount of time to break. This property is used in Bitcoin mining to create cryptographic puzzles that require capital outlay to solve.
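The Bitcoin puzzle can be sketched as a toy proof-of-work: find a nonce whose hash falls below a target. This is a simplified illustration, not Bitcoin’s actual block format, and it uses a deliberately tiny difficulty:

```python
import hashlib
from itertools import count

def mine(data, difficulty_bits=16):
    """Find a nonce whose SHA-256(data:nonce) starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()

# 16 zero bits means ~2**16 = 65536 hashes on average; Bitcoin's real
# difficulty is vastly higher, which is where the capital outlay comes in.
nonce, digest = mine("block header", 16)
print(nonce, digest)
```

Each additional bit of difficulty doubles the expected work, so difficulty scales smoothly with the network’s total hash rate.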

A proposal for “vectored signatures” for the “V” version control system uses features of these different strings of bits. It represents assertions about an individual’s relationship to and opinion of a piece of code as a bitstring, and it asserts the identity of that individual using cryptographic signatures. This combination generalizes both cryptographic “keysigning” as a recognition of identity and the use in Bitcoin transactions of cryptographic signatures on communications between individuals about single-dimensional (monetary) quantities.
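A rough sketch of the idea follows, with hypothetical assertion bits (REVIEWED, TESTED, APPROVED are illustrative names, not the actual “V” scheme) and a keyed HMAC standing in for a real public-key signature:

```python
import hashlib
import hmac

# Hypothetical assertion bits about a piece of code (not the actual "V" scheme):
REVIEWED, TESTED, APPROVED = 0b100, 0b010, 0b001

def sign_assertion(key, code_hash, assertion_bits):
    """Bind an opinion bitstring to a code hash under a signer's key.
    A keyed HMAC stands in here for a real public-key signature."""
    message = code_hash + format(assertion_bits, "03b")
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

code_hash = hashlib.sha256(b"print('hello')").hexdigest()
sig = sign_assertion(b"alice-secret-key", code_hash, REVIEWED | TESTED)
print(sig)
```

The bitstring carries the multi-dimensional opinion; the signature carries the identity; together they make the assertion attributable and tamper-evident.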

The bitstring representation of logical operators developed by the Logical Geometry project provides a compact and information-rich notation for various logics. Each bit represents a fact about an operator such as “true in all possible worlds”, and relates to geometric and trellis representations of the same operators. Bitwise operations on these representations are meaningful – for example bitwise NOT on p (1100) gives ¬p (0011).
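The bitwise manipulations are straightforward to reproduce. In this sketch each bit records an operator’s truth value in one of four valuations, with p encoded as 1100 as in the text (the encoding of q as 1010 is a conventional assumption):

```python
# Logical-Geometry-style bitstrings over four valuations of p and q.
WIDTH = 4
MASK = (1 << WIDTH) - 1

p = 0b1100
q = 0b1010  # an assumed companion encoding

def bits(x):
    """Render a value as a WIDTH-character bitstring."""
    return format(x, f"0{WIDTH}b")

not_p = ~p & MASK   # bitwise NOT gives negation
p_and_q = p & q     # bitwise AND gives conjunction
p_or_q = p | q      # bitwise OR gives disjunction

print(bits(not_p))    # 0011, i.e. the ¬p of the text
print(bits(p_and_q))  # 1000
print(bits(p_or_q))   # 1110
```

The logical operators literally are the bitwise ones here, which is what makes the notation computationally as well as visually compact.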

The combination of logically manipulable bitstring representations (as with Logical Geometry) asserted through cryptographic signatures (as with vectored signatures) seems like a possibly fruitful area of investigation.


Causal Horns

Gabriel's Horn
3D illustration of Gabriel’s horn by RokerHRO – Public Domain.

Causal horns are the equivalent of light cones for the potential effect of an event on other events over time – promoting or inhibiting the probability of a particular range of events.

They’re horns because they’re twistier than generalized cones (although they compose similarly to David Marr’s use of them). Anything from a simple cone via hooks and bulbs to corkscrews and nautilus shells.

A causal horn fits within slices of the light cones of a succession of information transmissions. It’s more information rich, slower, and multidirectional in comparison.