Categories
Art Art Computing Projects

Small Sensoria 2

bluetooth small sensorium
That’s a Bluetooth wireless small sensorium. The code to support this is a bit hacky as rxtx doesn’t seem to want to play with Bluetooth serial ports, but it works.

The code is in the repository:

https://gitorious.org/robmyers/small-sensoria

Next I need to wire up the LEDs directly.

Then a WiFi Arduino that could work with Thingspeak directly. And multiple sensoria could interact.

Categories
Art Art Computing Free Software Generative Art Projects

Small Sensoria 1

The electronics

This is the test setup for “Small Sensoria”. It is several LEDs being (mis)used as light sensors, connected to an Arduino. The USB cable connects them to a computer running a Processing sketch that renders the light intensities.

The values can be plotted linearly:

linear

Or radially:

radial

Next I am going to make the Arduino unit more independent by adding a Bluetooth shield, battery power, and wiring the LEDs up to it.

You can get the Arduino and Processing source code here:

https://gitorious.org/robmyers/small-sensoria

Categories
Aesthetics Art Art Computing Art History Free Culture Free Software Generative Art Howto Projects Satire

Psychogeodata (3/3)

cemetery random walk

The examples of Psychogeodata given so far have used properties of the geodata graph and of street names to guide the generation of Dérives. There are many more ways that Psychogeodata can be processed, some as simple as those already discussed, some much more complex.

General Strategies

Most of the following techniques can be used as part of a few general strategies:

  • Joining the two highest or lowest examples of a particular measure.

  • Joining the longest run of the highest or lowest examples of a particular measure.

  • Joining a series of destination waypoints chosen using a particular measure.

The paths constructed using these strategies can also be forced to be non-intersecting, and/or the waypoints re-ordered to find the shortest journey between them.
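The waypoint-joining strategy can be sketched with the NetworkX library mentioned under “Isomorphism” below. The “length” edge attribute and the function name here are illustrative assumptions rather than the Psychogeodata library’s actual API.

import networkx as nx

def join_waypoints(graph, waypoints):
    """Concatenate shortest paths between successive waypoints into one walk."""
    walk = []
    for a, b in zip(waypoints, waypoints[1:]):
        leg = nx.shortest_path(graph, a, b, weight="length")
        # Drop the first node of each leg after the first, so join nodes are not repeated.
        walk.extend(leg if not walk else leg[1:])
    return walk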

Mathematics

Other mathematical properties of graphs can produce interesting walks. The length of edges or ways can be used to find sequences of long or short distances.

Machine learning techniques, such as clustering, can arrange nodes spatially or semantically.

Simple left/right choices and fixed or varying degrees can create zig-zag or spiral paths for set distances or until the path self-intersects.
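As a rough illustration of the zig-zag idea: alternately turn left and right at each junction by choosing the neighbour whose bearing is closest to the desired heading. This sketch assumes nodes carry "lat" and "lon" attributes and uses a crude planar bearing, so it is indicative rather than exact.

import math

def bearing(graph, a, b):
    """Approximate planar bearing in degrees from node a to node b."""
    ax, ay = graph.nodes[a]["lon"], graph.nodes[a]["lat"]
    bx, by = graph.nodes[b]["lon"], graph.nodes[b]["lat"]
    return math.degrees(math.atan2(by - ay, bx - ax)) % 360.0

def angle_difference(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def zigzag(graph, start, heading=0.0, turn=90.0, steps=20):
    """Walk for a set number of steps, alternating left and right turns."""
    path, node = [start], start
    for i in range(steps):
        target = (heading + turn * (1 if i % 2 == 0 else -1)) % 360.0
        candidates = [n for n in graph.neighbors(node) if n not in path]
        if not candidates:
            break
        node = min(candidates,
                   key=lambda n: angle_difference(bearing(graph, path[-1], n), target))
        heading = bearing(graph, path[-1], node)
        path.append(node)
    return path

A spiral is the same walk with the turn always in one direction and the angle or step count varied.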

Map Properties

Find long or short street names, or street names with the most or fewest words or syllables, and find runs of them or use them as waypoints.

Find all the street names on a particular theme (colours, saints’ names, trees) and use them as waypoints to be joined in a walk.
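A sketch of selecting themed names as waypoints, assuming way names are stored as a "name" attribute on the graph’s edges (the attribute name and the colour list are assumptions):

COLOURS = {"red", "green", "blue", "white", "black", "rose", "violet"}

def themed_nodes(graph, theme_words=COLOURS):
    """Return the nodes of any way whose name contains one of the theme words."""
    nodes = set()
    for a, b, data in graph.edges(data=True):
        name = data.get("name", "").lower()
        if any(word in name.split() for word in theme_words):
            nodes.update((a, b))
    return sorted(nodes)

The resulting nodes can then be joined as waypoints using the strategy sketched under “General Strategies”.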

Streets that are particularly straight or crooked can be joined to create rough or smooth paths to follow.

If height information can be added to the geodata graph, node elevation can be used as a property for routing. Join high and low points, flow downhill like water, or find the longest runs of valleys or ridges.
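If an "elevation" attribute has been added to each node (it is not part of the standard OpenStreetMap data, so this is an assumption), flowing downhill like water is a greedy walk to ever lower neighbours:

def flow_downhill(graph, start):
    """Follow the steepest descent until a local minimum is reached."""
    path, node = [start], start
    while True:
        here = graph.nodes[node].get("elevation", 0.0)
        lower = [n for n in graph.neighbors(node)
                 if graph.nodes[n].get("elevation", 0.0) < here]
        if not lower:
            return path  # a local minimum: water would pool here
        node = min(lower, key=lambda n: graph.nodes[n]["elevation"])
        path.append(node)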

Information about named entities can be extracted from street, location and district names using services such as DBpedia or Freebase and used to connect them. Dates, historical locations, historical facts, biographical or scientific information and other properties are available from such services in machine-readable form.

Routing between peaks and troughs in sociological information such as population, demographics, crime occurrence, political affiliation or property prices can produce a journey through the social landscape.

Locations of Interest

Points of interest in OpenStreetMap’s data are represented by nodes tagged as “historic”, “amenity”, “leisure”, etc. It is trivial to find these nodes and use them as destinations for walks across the geodata graph. They can then be grouped and used as waypoints in a route that will visit every coffee shop in a town, or one of each kind of amenity in alphabetical order, in an open or closed path for example. Making a journey joining each location with a central base will produce a star shape.
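Finding such nodes might look like the sketch below, assuming the OSM tags have been copied onto node attributes; the result can then be ordered, grouped, or handed to a waypoint-joining routine like the one sketched under “General Strategies”.

def points_of_interest(graph, key="amenity", value="cafe"):
    """Return every node tagged with the given key/value pair."""
    return [n for n, data in graph.nodes(data=True) if data.get(key) == value]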

Places of worship (or former Woolworths stores) can be used to find ley lines (https://en.wikipedia.org/wiki/Ley_line) using linear regression or the techniques discussed below in “Geometry and Computer Graphics”.
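One way of scoring a candidate ley line is to fit a straight line through the locations by least squares and treat the residual as a measure of collinearity. This sketch uses NumPy and assumes projected (x, y) coordinates; a near-vertical alignment would need the axes swapping.

import numpy as np

def ley_line_error(points):
    """points is a list of (x, y) coordinates; lower scores are straighter alignments."""
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    coefficients, residuals, *rest = np.polyfit(xs, ys, 1, full=True)
    # residuals is empty when the fit is exact (e.g. only two points)
    return float(residuals[0]) if len(residuals) else 0.0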

Semantics

The words of poems or song lyrics (less stopwords), matched either directly or through hypernyms using Wordnet, can be searched for in street and location names to use as waypoints in a path. Likewise named entities extracted from stories, news items and historical accounts.
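A rough sketch of the matching step, using NLTK’s WordNet corpus and path similarity as a simple stand-in for full hypernym matching. The threshold is arbitrary, and the lyric words are assumed to have had stopwords removed already.

from nltk.corpus import wordnet as wn

def best_matching_street(word, street_names, threshold=0.3):
    """Return the street whose name is semantically closest to the word, if any."""
    word_synsets = wn.synsets(word)
    best, best_score = None, threshold
    for street in street_names:
        for token in street.lower().split():
            for a in word_synsets:
                for b in wn.synsets(token):
                    score = a.path_similarity(b) or 0.0
                    if score > best_score:
                        best, best_score = street, score
    return best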

More abstract narratives can be constructed using concepts from The Hero’s Journey.

Nodes found using any other technique can be grouped or sequenced semantically as waypoints using Wordnet hypernym matching.

Isomorphism

Renamed Tube maps, and journeys through one city navigated using a map of another, are examples of using isomorphism in Psychogeography.

Entire city graphs are very unlikely to be isomorphic, and the routes between famous locations will contain only a few streets anyway, so sub-graphs are both easier and more useful for matching. Better geographic correlations between locations can be made by scoring possible matches using the lengths of ways and the angles of junctions. Match accuracy can be varied by changing the tolerances used when scoring.

Simple isomorphism checking can be performed using the NetworkX library’s functions. Points from a subgraph can be projected onto a target graph, brute-force searching for matches by varying the matrix used in the projection and scoring each attempt on how closely the points match. Or isomorphisms can be bred using genetic algorithms, with the degree of isomorphism as the fitness function and proposed subgraphs as the population.
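The simple check can be as little as the following, asking NetworkX whether a city graph contains a subgraph isomorphic to a small pattern graph:

from networkx.algorithms import isomorphism

def contains_pattern(city_graph, pattern_graph):
    """True if some subgraph of the city is isomorphic to the pattern."""
    matcher = isomorphism.GraphMatcher(city_graph, pattern_graph)
    return matcher.subgraph_is_isomorphic()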

The Social Graph

Another key contemporary application of graph theory is Social Network Analysis. The techniques and tools of both the social sciences and Web 2.0 can be applied directly to geodata graphs.

Or the graphs of people’s social relationships from Facebook, Twitter and other services can be mapped onto their local geodata graph using the techniques from “Isomorphism” above, projecting their social space onto their geographic space for them to explore and experience anew.

Geometry and Computer Graphics

Computational geometry, computer graphics and computer vision techniques can be used on the nodes and edges of geodata to find forms.

Shapes can be matched by using them to cull nodes with an insideness test, by finding the nearest nodes to the lines of the shape, or by using line/edge intersection. Such matching can be made fuzzy or accurate using the matching techniques from “Isomorphism”.
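An insideness test can be sketched using matplotlib’s Path class, again assuming nodes carry "lon" and "lat" attributes:

from matplotlib.path import Path

def nodes_inside(graph, shape_vertices):
    """shape_vertices is a list of (lon, lat) pairs describing a closed shape."""
    shape = Path(shape_vertices)
    return [n for n, data in graph.nodes(data=True)
            if shape.contains_point((data["lon"], data["lat"]))]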

Simple geometric forms can be found – triangles, squares and quadrilaterals, stars. Cycle bases may be a good source of these. Simple shapes can be found – smiley faces, house shapes, arrows, magical symbols. Sequences of such forms can be joined based on their mathematical properties or on semantics.

For more complex forms, face recognition, object recognition, or OCR algorithms can be used on nodes or edges to find shapes and sequences of shapes.

Classic computer graphics methods such as L-systems, turtle graphics, Conway’s Game of Life, or Voronoi diagrams can be applied to the geodata graph in order to produce paths to follow.

Geometric animations or tweens created on or mapped onto the geodata graph can be walked on successive days.

Lived Experience

GPS traces generated by an individual or group can be used to create new journeys relating to personal or shared history and experience. So can individual or shared checkins from social networking services. Passenger level information for mass transport services is the equivalent for stations or airports.

Data streams of personal behaviour such as scrobbles, purchase histories, and tweets can be fetched and processed semantically in order to map them onto geodata. This overlaps with “Isomorphism”, “Semantics”, and “The Social Graph” above.

Sensor Data

Temperature, brightness, sound level, radio wave, radiation, gravity and entropy levels can all be measured or logged and used as weights for pathfinding. This brings Psychogeodata into the realm of Psychogeophysics.
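If readings are attached to edges (the "loudness" attribute here is hypothetical), the weighting is a one-line request to NetworkX’s pathfinding:

import networkx as nx

def quietest_route(graph, start, end):
    """Find the route that minimises the summed sound-level readings."""
    return nx.shortest_path(graph, start, end, weight="loudness")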

Conclusion

This series of posts has made the case for the concept, practicality, and future potential of Psychogeodata. The existing code produces interesting results, and there’s much more that can be added and experienced.

(Part one of this series can be found here, part two can be found here. The source code for the Psychogeodata library can be found here.)

Categories
Aesthetics Art Art Computing Art Open Data Free Culture Free Software Generative Art Howto Projects Satire

Psychogeodata (2/3)

derive_sem

Geodata represents maps as graphs of nodes joined by edges (…as points joined by lines). This is a convenient representation for processing by computer software. Other data can be represented in this way, including words and their relationships.

We can map the names of streets into the semantic graph of WordNet using NLTK. We can then establish how similar words are by searching the semantic graph to find how far apart they are. This semantic distance can be used instead of geographic distance when deciding which nodes to choose when pathfinding.
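As a sketch of the idea (not the derive_sem implementation itself), a semantic distance between two street-name words can be derived from WordNet path similarity with NLTK:

from nltk.corpus import wordnet as wn

def semantic_distance(word_a, word_b):
    """0.0 for closely related words, 1.0 for unrelated ones."""
    scores = [a.path_similarity(b) or 0.0
              for a in wn.synsets(word_a)
              for b in wn.synsets(word_b)]
    return 1.0 - max(scores, default=0.0)

A value like this can then stand in for edge length when choosing the next node during pathfinding.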

Mapping between these two spaces (or two graphs) is a conceptual mapping, and searching lexicographic space using hypernyms allows abstraction and conceptual slippage to be introduced into what would otherwise be simple pathfinding. This defamiliarizes and conceptually enriches the constructed landscape, two key elements of Psychogeography.

The example above was created by the script derive_sem, which creates random walks between semantically related nodes. It’s easy to see the relationship between the streets it has chosen. You can see the html version of the generated file here, and the script is included with the Psychogeodata project at https://gitorious.org/robmyers/psychogeodata .

(Part one of this series can be found here, part three will cover potential future directions for Psychogeodata.)

Categories
Art Art Computing Art History Free Culture Free Software

Source Code

The part of my review of “White Heat Cold Logic” that seems to have
caught people’s attention is:

“for preservation, criticism and artistic progress (and I do mean
progress) it is vital that as much code as possible is found and
published under a Free Software licence (the GPL). Students of art
computing can learn a lot from the history of their medium despite the
rate at which the hardware and software used to create it may change,
and code is an important part of that.”

http://www.furtherfield.org/features/reviews/white-heat-cold-logic

I have very specific reasons for saying this, informed by personal
experience.

When I was an art student at Kingston Polytechnic, I was given an
assignment to make a new artwork by combining two previous artworks: a
Jackson Pollock drip painting and a Boccioni cyclist. I could not “read”
the Boccioni cyclist: the forms did not make sense to me, and so I was
worried I would not be able to competently complete the assignment. As
luck would have it there was a book of Boccioni’s drawings in the
college library that included the preparatory sketches for the painting.
Studying them allowed me to understand the finished painting and to
re-render it in an action painting style.

When I was a child, a book on computers that I bought from my school
book club had a picture of Harold Cohen with a drawing by his program
AARON. The art of AARON has fascinated me to this day, but despite my
proficiency as a programmer and as an artist my ability to “read”
AARON’s drawings and to build on Cohen’s work artistically is limited by
the fact that I do not have access to their “preparatory work”, their
source code.

I have been told repeatedly that access to source code is less important
than understanding the concepts behind the work or experiencing the work
itself. But the concepts are expressed through the code, and the work
itself is a product of it. I can see a critical case being made for the
idea that “computer art” fails to the extent that the code rather than
the resultant artwork is of interest. But as an artist and critic I want
to understand as much of the work and its history as possible.

So my call for source code to be recovered (for historical work) and
released (for contemporary work) under a licence that allows everyone to
copy and modify it comes from my personal experience of understanding
and remaking an artwork thanks to access to its preparatory materials on
the one hand and the frustration of not having access to such materials
on the other. And I think that awareness of and access to source code
for prior art (in both senses of the term) will enable artists who use
computers to stop re-inventing the wheel.

If you are making software art please make the source code publicly
available under the GPL3+, and if you are making software-based net art
please make it available under the AGPL3+ .

Categories
Art Computing Art Open Data Free Software Projects

R Cultural Analytics Library Update

The R Cultural Analytics library has been updated to remove any dependency on EBImage (which in turn has a dependency on ImageMagick that complicates installation on many systems). It now uses raster images instead. This has also made the code faster.

You can find the new version and installation instructions here:

https://r-forge.r-project.org/R/?group_id=1249

Categories
Aesthetics Art Computing Art Open Data Projects

The R Cultural Analytics Library

I have gathered together much of the code from my series of posts on Exploring Art Data as a library for the R programming language which is now available as a package on R-Forge:

https://r-forge.r-project.org/projects/rca/

I will be adding more code to the library over time. It’s very easy to install, just enter the following into an R session:

install.packages("CulturalAnalytics", repos="http://R-Forge.R-project.org")

The library includes code for ImagePlot-style image scatter plots, colour histograms, colour clouds and other useful functions. The examples in the documentation should help new users to get started quickly.

R is the lingua franca for statistical computing, and I believe that it’s important for art and digital humanities computing to avail itself of its power.

Categories
Art Computing Art Open Data Free Software

Exploring Art Data 23

Having written a command-line interface (CLI), we will now write a graphical user interface (GUI). GUIs can be an effective way of managing the complexity of software, but their disadvantages are that they usually cannot be scripted as effectively as CLI applications and that they usually cannot be extended or modified as simply or as deeply as code run from a REPL.

That said, if software is intended as a stand-alone tool for performing tasks that will not be repeated and do not require much setup, a GUI can be very useful. So we will write one for the code in image-properties.r.

As with the CLI version, we will run this code using Rscript. The script can be run from the command line, or an icon for it can be created in the operating system’s applications menu or dock.

#!/usr/bin/env Rscript
## -*- mode: R -*-


The GUI framework that we will use is the cross-platform gWidgets library. I have set it up to use Gtk here, but Qt and Tk versions are available as well. You can find out more about gWidgets at http://cran.r-project.org/web/packages/gWidgets/index.html.

## install.packages("gWidgetsRGtk2", dep = TRUE)
require(gWidgets)
options("guiToolkit"="RGtk2")


We source properties-plot.r to load the code that we will use to plot the image once we have gathered all the configuration information we need using the GUI.

source('properties-plot.r')


The first part of the GUI that we define is the top level window and layout. The layout of the top level window is a tabbed pane of the kind used by preferences dialogs and web browsers. We use this to organise the large number of configuration options for the code and to present them to the user in easily understood groupings.
Notice the use of “layout” objects as matrices to arrange interface widgets such as buttons within the window and later within each page of the “notebook” tabbed view.

win


The first tab contains code to create and handle input from user interface elements for selecting the kind of plot, the data file and folder of images to use, and the file to save the plot as if required. It also allows the user to specify which properties from the data file to plot.

table


We use functions to allow the user to choose the data file, image folder, and save file. Using the GUI framework's built-in support for file choosing makes this code remarkably compact.

setDataFile


Often part of the GUI must be updated, enabled or disabled in response to changes in another part. When the user selects a "Display" plot we need not require the user to select a file to save the plot in, as the plot will be displayed in a window on the screen. The next functions implement this logic.

updateSaveFile


The second tab contains fields to allow the user to configure the basic visual properties of the plot, its height, width, and background colour.

table


The third tab allows the user to control the plotting of images, labels, points and lines.

table


The fourth (and final) tab allows the user to manage how the axes are plotted.

table4


Having created the contents of each tab, we set the initial tab that will be shown to the user and display the window on the screen.

svalue(nb) <- 1
visible(win) <- TRUE


Next we will write code to set the values of the global variables from the GUI, and perform a render. Until then, we can define a do-nothing renderImage function to allow us to run and test the GUI code.

## A do-nothing placeholder so that the GUI can be run and tested
renderImage <- function(...) {}


If we save this code in a file called propgui and make it executable using the shell command:

chmod +x propgui

we can call the script from the command line like this:

./propgui

We can enter values into the fields of the GUI, choose files, and press buttons (although pressing the Render button will of course have no effect yet).

Categories
Art Computing Art Open Data Satire

Digital Evaluation Of The Humanities

Humanities Computing dates back to the use of mainframe computers with museum catalogues in the 1950s. The first essays on Humanities Computing appeared in academic journals in the 1960s, the first conventions on the subject (and the Icon programming language) emerged in the 1970s, and ChArt was founded in the 1980s. But it wasn’t until the advent of Big Data in the 2000s and the rebranding of Humanities Computing as the “Digital Humanities” that it became the subject of moral panic in the broader humanities.

The literature of this moral panic is an interesting cultural phenomenon that deserves closer study. The claims that critics from the broader humanities make against the Digital Humanities fall into two categories. The first is material and political: the Digital Humanities require and receive more resources than the broader humanities, and these resources are often provided by corporate interests that may have a corrupting influence. The second is effectual and categorical: it’s all well and good making pretty pictures with computers or coming up with some numbers free of any social context, but the value of the broader humanities is in the narratives and theories that they produce.

We can use the methods of the Digital Humanities to characterise and evaluate this literature. Doing so will create a test of the Digital Humanities that has bearing on the very claims against them by critics from the broader humanities that this literature contains. I propose a very specific approach to this evaluation. Rather than using the Digital Humanities to evaluate the broader humanities claims against it, we should use these claims to identify key features of the broader humanities self-image that they use to contrast themselves with the Digital Humanities and then evaluate the extent to which the literature of the broader humanities actually embody these features.

This project has five stages:

1. Determine the broader humanities’ claims of properties that they possess in contrast to the Digital Humanities.
2. Identify models or procedures that can be used to evaluate each of these claims.
3. Identify a corpus or canon of broader humanities texts to evaluate.
4. Evaluate the corpus or canon using the models or procedures.
5. Use the results of these evaluations as direct constraints on a theory of the broader humanities.

Notes on each stage:

Stage 1

Above I outlined some of the broader humanities’ claims against the Digital Humanities that I am familiar with. We can perform a Digital Humanities analysis of texts critical of the Digital Humanities in order to test the centrality of these claims to the case against the Digital Humanities and to identify further claims for evaluation.

Stage 2

There are well defined computational and non-computational models of narrative, for example. There are also models of theories, and of knowledge. To the extent that the broader humanities find these insufficient to describe what they do and regard their use in a Digital critique as inadequate they will have to explain why they feel this is so. This will help both to improve such models and to advance the terms of the debate within the humanities.

One characteristic of broader humanities writing that is outside the scope of the stated aims of this project, but that I believe is worthwhile investigating, is the extent to which humanities writing is simply social grooming and ideological normativity within an educational institutional bureaucracy. This can be evaluated using measures of similarity, referentiality and distinctiveness.

Stage 3

It is the broader humanities’ current self-image (in contrast to its image of the Digital Humanities) that concerns us, so we should identify a defensible set of texts for analysis.

There are well established methods for establishing a corpus or canon. We can take the most read, most cited, most awarded or most recommended articles established by a particular service or institution from a given date range (for example 2000-2009 inclusive or the academic year for 2010). We can take a reading list from a leading course on the subject. Or we can try to locate every article published online within a given period. Whichever criterion we choose we will need to explicitly identify and defend it.

Stage 4

Evaluating the corpus or canon will require an iterative process of preparing data and running software then correcting for flaws in the software, data, and models or processes. This process should be recorded publicly online in order to engender trust and gain input. To support this and to allow recreation of results the software used to evaluate the corpus or canon, and the resulting data, must be published in a free and open source manner and maintained in a publicly readable version control repository.

Stage 5

Stage five is a deceptive moment of jouissance for the broader humanities. It percolates number and model into narrative and theory, but in doing so it provides a test of the broader humanities’ self-image.

For the broader humanities to criticise the results of the project will require its critics to understand more of the Digital Humanities and of their own position than they currently do. Therefore even if the project fails to demonstrate or persuade it will succeed in advancing the terms of the debate.

Categories
Art Computing Art Open Data

Exploring Art Data 22

So far we have used the R REPL to run code. Let’s write a script that provides a command-line interface for the plotting code we have just written.
A command-line interface allows the code to be called via the terminal, and to be called from shell scripts. This is useful for exploratory coding and for creating pipelines and workflows of different programs. It also allows code to be called from network programming systems such as Hadoop without having to convert the code.
To allow the code to be called from the command line we use a “pound bang” (#!) line that tells the shell to use the Rscript interpreter rather than the interactive R system.

#!/usr/bin/env Rscript
## -*- mode: R -*-


Next we import the “getopt” library that we will use to parse arguments passed to the script from the command line.

library('getopt')


And we import the properties-plot.r code that we will use to perform the actual work.

source('properties-plot.r')


The first data and functions we write will be used to parse the arguments passed to the script by its caller. The arguments are defined in a standard format used by the getopt library.

args


Now that we have the arguments we can process them. We check whether the user has provided each argument by checking whether its value is not null.
It's traditional to handle the help flag first.

if(! is.null(opt$help)) {
    self = commandArgs()[1];
    cat(paste(getopt(args, usage=TRUE)))
    q(status=1)
}


Next we check for required arguments, those arguments that the user must have provided in order for the code to run. Rather than checking each argument individually we list the required arguments in a vector and then check for their presence using set intersection. If the resulting set isn't empty, we build a string describing the missing arguments and use it to print an error message before exiting the script.

required


Then we set the global variables from properties-plot.r to the command line arguments that have been provided for them. We map the argument name to the variable name and then where it is present we use the assign function to set the variable.

value.mappings


Some variables need to be set to a boolean value depending on whether a particular argument is present as a flag or not. We use a similar technique for this, but the matrix containing the mapping from argument to variable also has a boolean value that is used to set the variable rather than fetching an argument value.

boolean.mappings


The render type is specified through the arguments passed to the script, but we only want to perform one kind of render. We check that only one kind of render was specified or else we quit with an informative error message.

## Count how many of the three render options were given on the command line
renderTypeCount <- length(intersect(names(opt), c("png", "pdf", "display")))
if(renderTypeCount > 1){
    cat("Please specify only one of png, pdf or display to render\n")
    q(status=1)
}


We get the file name to save the render as, if needed.

getOutfile <- function(opt){
    if(is.null(opt$outfile)){
        cat("Please specify a file to save the render to\n")
        q(status=1)
    }
    opt$outfile
}


The last bit of configuration we get is the column to use for filenames in the data file, if it's provided; otherwise we default to "filename".

getFilenameColumn


The last function we define in the script performs the render specified in the arguments to the script.

render


Finally, outside of any function, we call the functions we have defined in order to do the work of processing the parameters and calling the code.

checkRequiredArgs(opt, required)
valueOpts(opt, value.mappings)
booleanOpts(opt, boolean.mappings)
render(opt)


If we save this code in a file called propcli and make it executable using the shell command:

chmod +x propcli

we can call the script from the command line like this:

./propcli --datafile images.txt --imagedir images --xcolumn saturation_median --ycolumn hue_median