Urinal: http://OFFLINEZIP.wpsho/art/urinal/
Balloon Dog: http://www.furtherfield.org/projects/balloon-dog-rob-myers
Exploring Art Data: https://encrypted.google.com/search?q=site%3Arobmyers.org+%22exploring+art+data%22
Art Open Data: http://blog.okfn.org/2011/02/01/art-open-data/
The Colours In My Studio: http://OFFLINEZIP.wpsho/art/studio-colours/
Streaming Aesthetics (Shape): http://OFFLINEZIP.wpsho/weblog/2011/08/27/streaming-aesthetics-shape/
Send Values: http://OFFLINEZIP.wpsho/weblog/2011/09/09/sendvalues/
Baldessarinator: http://OFFLINEZIP.wpsho/weblog/2011/09/25/baldessarinator/
Uploads: http://gitorious.org/robmyers/uploads
The R Cultural Analytics Library: https://r-forge.r-project.org/projects/rca/
Psychogeodata: http://OFFLINEZIP.wpsho/weblog/2011/12/31/psychogeodata-33/
Mona Lisa Of Disapproval: http://gitorious.org/robmyers/mona-lisa-of-disapproval
Reviews: http://www.furtherfield.org/user/rob-myers
Category: links
Since last April I’ve been posting collections of links to Netbehaviour. These are links that I’ve found during my web browsing on the subjects of art, technology and society. I try to arrange them to create associations or narratives wherever possible.
I’ve written a script to convert a calendar year’s worth of links from emails to an HTML page for browsing.
Here it is:
#!/usr/bin/env python
# Copyright 2012 Rob Myers
# Licenced GPLv3 or later

################################################################################
# Imports
################################################################################

import cgi
import email
import mailbox
import re
import sys
import time

################################################################################
# Configuration
################################################################################

links_year = "2011"
mailbox_path = "/home/rob/.thunderbird/tq4afdtc.default/ImapMail/imap.robmyers.org/INBOX.sbd/Archives-1.sbd/2011"

################################################################################
# The messages
################################################################################

messages = [message for message in mailbox.mbox(mailbox_path).itervalues()
            if message['subject']
            and message['subject'].startswith('[NetBehaviour] Links')
            and links_year in message['date']]

# Sort messages by date, as they may have been filed out of order.
# Wasteful, as we parse the date again later.
messages.sort(key=lambda m: time.mktime(email.utils.parsedate(m['Date'])))

################################################################################
# Reformat and print the links with their commentary
################################################################################

# Note: the HTML tags in the print statements below were stripped when this
# post was archived; the markup here is a plausible reconstruction.
print "<html><head><title>Links For %s</title></head><body>" % links_year
print "<h1>Links For %s</h1>" % links_year
for message in messages:
    # Keep track of whether the last line was commentary (or links/whitespace)
    last_line_was_commentary = False
    # Print a YYYY-MM-DD date as the title
    date = email.utils.parsedate(message['Date'])
    print '<h2>%s-%s-%s</h2>' % (date[0], date[1], date[2])
    # Email structure is... interesting...
    for part in message.walk():
        if part.get_content_type() == "text/plain":
            body = part.get_payload(decode=True)
            break
        elif part.get_content_type() == "text/html":
            body = part.get_payload(decode=True)
            # Strip html tags to give plain text
            body = re.sub(r'<.*?>', '', body)
            # Keep trying to find a text/plain part
    # Strip the mailing list footer
    try:
        body = body.split('_______________________')[0]
    except:
        print >> sys.stderr, "Can't get body for %s %s" % (message['date'],
                                                           message['subject'])
    # Regularize leading and trailing whitespace
    body = body.strip()
    for line in body.split('\n'):
        stripped = line.strip()
        if '://' in stripped:
            print '<p>'
            print '<a href="%s">%s</a>' % (stripped, stripped)
            print '</p>'
            last_line_was_commentary = False
        elif stripped != '':
            # Join multi-line commentary into a single line
            if last_line_was_commentary:
                print ' ',
            print '%s' % cgi.escape(line)
            last_line_was_commentary = True
        else:
            last_line_was_commentary = False
print '<hr>'
print '<p>Links curated by Rob Myers.</p>'
print '</body></html>'
And you can download an archive of the links here: links-2011.html.gz
There are a couple of glitches in the file as a result of the ad hoc nature of the original emails. Finding them is left as an exercise for the reader.
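The heart of the conversion is the final pass: any line containing “://” becomes a link, and consecutive lines of commentary are joined into a single block. That logic can be sketched in isolation as a self-contained Python 3 rewrite (an illustration, not the script above; the function name and example text are mine):

```python
import html


def reformat(body):
    """Convert a plain-text links post to HTML: lines containing '://'
    become links, and runs of commentary lines are joined into one
    paragraph each."""
    out = []
    commentary = []

    def flush():
        # Emit any accumulated commentary as a single joined paragraph.
        if commentary:
            out.append('<p>%s</p>' % html.escape(' '.join(commentary)))
            del commentary[:]

    for line in body.strip().split('\n'):
        stripped = line.strip()
        if '://' in stripped:
            flush()
            out.append('<p><a href="%s">%s</a></p>' % (stripped, stripped))
        elif stripped:
            commentary.append(stripped)
        else:
            flush()
    flush()
    return '\n'.join(out)


example = """A project by two artists,
documented here:
http://example.com/art"""
print(reformat(example))
# Prints:
# <p>A project by two artists, documented here:</p>
# <p><a href="http://example.com/art">http://example.com/art</a></p>
```

The original script streams its output with print statements instead; accumulating into a list as here just makes the joining logic easier to test.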
Art Open Data Links
Jonathan Gray’s slides on “Open Data in the Arts and Humanities”:
http://www.slideshare.net/jwyg/open-data-in-the-arts-and-humanities
Ben Werdmuller von Elgg’s blog post “Open data in the arts: an introduction”:
http://www.festivalslab.com/open-data-in-the-arts-an-introduction?c=1
Culture Grid Hack day (now delayed until early next year):
http://www.colourlovers.com/business/blog/2010/09/15/the-most-powerful-colors-in-the-world
Quantitative Aesthetics: the most popular colours in the web’s brand logos.
http://blog.p2pfoundation.net/remi-sussan-hacking-the-sacred-project/2010/09/17
“I have just created a google group about hierohacking: the goal of
this group will be to discuss applied neurotheology, see how we can
“hack the sacred”, use intelligently and rationally religious thinking
and practices for personal purposes; discussion will be of course about
various ASC (altered states of consciousness), technologies, BCI, etc.
But also about the creation of symbolic architectures and mythologies."
http://blog.crowdflower.com/2010/09/mechanical-proust-an-automated-crowd-written-blog
Crowdsourced literature: Amazon Mechanical Turk workers asked to write a Proust blog.
http://blog.makezine.com/archive/2010/09/15th_anniversary_hackers_party.html
15th Anniversary “Hackers” Party – Celebrating the 1995 film about hacking that’s so bad, hackers love it.
Steampunk Primary Sources II
More PDFs (and sometimes epubs) of original Victorian and Edwardian books of interest to Steampunks. See part one here, and how these books were found here.
Automata (1893)
http://www.archive.org/details/automataoldandn00unkngoog
Automobiles (1900)
http://www.archive.org/details/horselessvehicle00hiscrich
Babbage On The Difference Engine (1864)
http://books.google.com/books?id=Fa1JAAAAMAAJ
Engineering (1860)
http://books.google.com/books?id=rkkOAAAAYAAJ
Factories (1844)
http://books.google.com/books?id=dXs4AAAAMAAJ
The Jacquard Loom (1895)
http://www.archive.org/details/jacquardweaving00bellgoog
Lovelace On The Difference Engine (1853)
http://www.archive.org/details/scientificmemoir03memo
(also http://www.fourmilab.ch/babbage/sketch.html)
Mechanisms (1868)
http://books.google.com/books?id=vOhIAAAAMAAJ
Shipbuilding (1869)
Wikimedia Hates Art
http://identi.ca/tag/wikimediahatesart
I have a lot of respect for the Wikimedia Foundation: everyone I’ve met from it has been great, and I use their software and projects daily. I was proud to take part in the Wikipedia Loves Art event earlier this year. But as an artist I am disappointed and offended by Wikimedia’s treatment of a contemporary art project.
Whatever lawyers who charge for each letter they send out on your behalf may tell you, and whatever your opinion of contemporary art, there are strong precedents in the US supporting artists’ First Amendment right to free speech when they use trademarks. Demanding that artists transfer resources to a trademark holder or face legal action is therefore not just chilling to free speech but legally shaky.
The EFF, to their credit, point this out here –
http://www.eff.org/deeplinks/2009/04/wikipedia-threatens-
And details on an artwork and lawsuit that provide an important precedent can be found here –
http://www.barbieinablender.org/
Wikimedia’s response has been to disparage the concerns of the artists and the EFF –
http://lists.wikimedia.org/pipermail/foundation-l/2009-April/051505.html
Other web sites have picked up on this, and are supporting the artists –
http://newsgrist.typepad.com/underbelly/2009/04/wikipedia-threatens-artists-for-fair-use.html
http://freeculturenews.com/2009/04/23/wikipedia-accuses-web-site-of-trademark-violation/
The problem with Wikimedia’s over-reaching application of their trademark, to the material detriment of artists, is its chilling effect on freedom of speech. Wikimedia owe the artists and the EFF an apology. This behaviour really is beneath such an excellent organization.
#!/bin/bash
# Copyright 2009 Rob Myers
# Licenced under the GPL 3 or, at your option, any later version.
# Produce a Plucker version of Free Software, Free Society
# Some texinfo errors not fixed
# Convert eps images to GIFs
convert images/clib.eps images/clib.gif
convert images/code.eps images/code.gif
convert images/flex.eps images/flex.gif
convert images/free_software_song.eps images/free_software_song.gif
convert images/headMain.eps images/headMain.gif
convert images/party.eps images/party.gif
convert images/richard.eps images/richard.gif
convert images/philosophical-gnu.eps images/philosophical-gnu.gif
# Fix texinfo problems
perl -pe 's/@heading\{(.*)\}/@heading $1/' -i fs_for_freedom.texi
perl -pe 's/^\\input texinfo_times.tex//' \
-i rms-essays.texi
echo "\
@ifnottex
@alias unnumberedfootnote = footnote
@end ifnottex
@ifnottex
@macro sp1
@sp 1
@end macro
@end ifnottex
@include rms-essays.texi
" > rms-essays-html.texi
# Convert to plucker
makeinfo --html --no-headers --no-split --force -o rms-essays.html \
rms-essays-html.texi
perl -pe 's/^(
FLOSS Manuals To Plucker
#!/bin/bash
# Copyright 2009 Rob Myers
# Licenced under the GPL version 3 or, at your option, any later version.
if [ "$1" == "" ]; then
echo "Please enter name of manual directory on server (e.g. FlossManuals)."
echo "You can find this by going to the printable version of the manual."
exit 1;
fi
plucker-build --zlib-compression --stayonhost --bpp=8 -p . \
-f $1 --staybelow=http://en.flossmanuals.net/floss/pub/$1/ \
http://en.flossmanuals.net/$1/print
links for 2007-11-21
“It appears the birthplace of Rome’s founders, Romulus and Remus, has been unearthed.”