Freemasons Of The Future
The Semantic Web, a machine-readable representation of everything, is a future that has already started to arrive. The University of Openess’ Faculty of Cartography looks at its dual potential to flatten and diversify the relations between data and existence. By Saul Albert, Simon Worthington, and Fabian Thompsett, with help from Ben Russell, Jo Walsh and Asim Butt.
At the Next 5 Minutes hacktivist conference, September 2003, an interest group formed around the topic of cartography. We organised a series of meetings and presentations to discuss developments in collaborative cartography, location-based services, information visualisation and the Semantic Web. As a contribution to this work, the University of Openess’ Faculty of Cartography wrote the following text to introduce some of the problems and potentials posed by the Semantic Web for the Free Information movement.
The first and most salient fact about the Freemasons of the Future is that they do not exist. However, they are sentient beings trying to struggle into existence. From their perspective they are involved in a life and death struggle to ensure their own past, some of which we perceive as the present. Located in the distant future, after time travel has become commonplace, they endeavour to sojourn into what they regard as history to create the conditions which they consider necessary for their own existence. But this does not mean they are going to be successful.
They justify their existence upon their existence, which they consider self-evident. The current conditions of life are to be extrapolated into the future until every quantum of human activity has been commodified: from genetic engineering to nanotechnology, the search for profitability is projected in every conceivable way. A world where sentient scraps of human biology exist as islands of wetware within the framework of vast cathedrals of computerised electronics. The distinction between human, animal and machine is dissolved as these products of bio-engineering are installed to fulfil operative functions within a nauseous system developed to do nothing but manifest continuously expanding value. Whether we can regard such creatures as our offspring, or whether they are simply genetically engineered mutant beings created out of this or that strand of DNA is perhaps beside the point. This is the nightmare world to which we are heading, and which would provide the sort of massive bio-computer needed by the Freemasons of the Future to realise their greatest desire: unequivocal existence.
Faced with this onslaught which we can see around us, as all barriers to genetic engineering and the conversion of existence into docile information are torn down one by one, how can we respond? The class struggle now manifests itself in dimensions which have recently been invaded by the process of industrialisation: from the industrialisation of the imagination, through television, to the industrialisation of knowledge through the internet, the information age continues to build on the ‘achievements’ of the Age of Steam, the Age of Petrol and the Atomic Age. The current episode we are living through is rattling asunder as the ripples of the Quantum Time Bomb[1] penetrate the deepest recesses of human activity.
– Harry Potter[2], extract from the announcement of the Limehouse TimeTravel Rally[3]

It has become inaccurate to discuss ‘the web’ as a single entity, since this use of a definite article belies the increasingly electrical interconnectedness of a plethora of devices, processes, information and indices. ‘The web’ is inadequate because it implies a coherence that is not evident in the use of many incompatible formats, private networks, and non-indexed sections of network. This incoherent, frayed mess of networks is like an expanding and obscure territory for which there are no maps, or at least no maps with standard keys, scales or control coordinates. In some ways ‘surfing’ or ‘browsing’ are increasingly appropriate metaphors for the superficial and indiscriminate ways our browsers allow us to use the web. These limited research excursions are almost entirely dependent on the indices of one of the major search engines (Google in most cases), which has become the limit of the network; everything else is uncharted, unconnected and therefore largely inaccessible. By attempting to develop an extensible and syntactically coherent language to describe networks and information resources, the Semantic Web project promises (or threatens) to help map this lost world of data. ‘The Semantic Web’, Tim Berners-Lee explains, ‘is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.’[4] Using computer-readable data formats, and programmable agents with which to collect and categorise them, the object is to produce a schema from which to build a local description of local data formats, network topographies and information resources. This local description, fitting into the logical framework of the Semantic Web, can then be transposed into other contexts, linked to similar or related descriptions of other resources and networks, understood and used by human and software agents; put on the map.

To expedite the growth of colonial empire, the navigators Cook and Vancouver pioneered new forms of cartography in the late 18th century, a period sometimes referred to as the ‘cartographic reformation’. Where they had no empirical, controlled data for their maps, they simply left large blank sections rather than filling in the gaps with supposition, thematic motifs or ‘here be dragons’. This was the origin of a powerful set of scientistic norms, entailing representations of the world, that are largely still intact, keyed into subsequent cartographic and spatial technologies. The initial impact of the cartographic reformation is comparable to that of the Semantic Web, with both working to reveal enormous blank spaces in our maps and the limited uses of our networks. In turn, they set out a framework by which these topographies might be described, understood, and mapped.
In the mid ’90s the computer scientist Ramanathan V. Guha went to work for Apple, where he developed a metadata format called the Meta Content Framework (MCF), which described websites, files, filesystems and the relationships between them. The intention was that, using Apple’s ‘HotSauce’ browser, users could fly through a 3-dimensional representation of that content. However, it was only when Guha moved to Netscape in ’97, and Extensible Markup Language (XML) became a common standard for the exchange of structured, computer-readable data, that his ideas about representing semantic associations between bits of data began to gain influence. At that time the World Wide Web Consortium (W3C), the international web standards body founded by Tim Berners-Lee, began a general-purpose metadata project, loosely termed the ‘Semantic Web’, to develop ways of representing data on the web. It is based on the Resource Description Framework (RDF): the basic idea is that resources are named with Uniform Resource Locators (URLs, or web addresses) and described by the links between them, using machine-readable XML as a syntax. The framework is general enough that it is not limited to describing data on the web and, crucially, it can also be used to describe and interrelate things in the world: people, places, objects, abstract concepts – the largest blank spaces on the semantic map. If we can assign a URL to a physical object, person, or idea, this URL can in turn be linked to other URLs referring to other people, objects, ideas or links. Someone (or something) looking at this association can then make inferences about what is being represented from its associations, which can be further described and qualified by more links. The ‘namespaces’ – machine-readable XML documents that group together the vocabularies used in these descriptions – can also be seen as nodes in this semantic network, and linked to, extended, re-written and re-defined. Hence these representations are always contingent and non-originary; there is no start or end point, and no point of observation that can be outside them, just new nodes in the network.
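To make this mechanism concrete, here is a minimal sketch of building and linking such descriptions, written in Python with the rdflib library; the example.org URLs and the people described are invented for illustration, though FOAF’s ‘name’ and ‘knows’ terms are real published vocabulary:

```python
from rdflib import Graph, Literal, Namespace

# FOAF is a real, published vocabulary; EX is an invented namespace for this sketch.
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
EX = Namespace("http://example.org/people/")

g = Graph()

# Resources are just URLs; describing them means adding links (triples).
g.add((EX.alice, FOAF.name, Literal("Alice")))  # a literal property
g.add((EX.alice, FOAF.knows, EX.bob))           # a link to another resource
g.add((EX.bob, FOAF.name, Literal("Bob")))

# The same graph serialises to machine-readable RDF/XML.
print(g.serialize(format="xml"))
```

Anything – a person, a company, an abstract concept – can stand in the subject or object position, which is what lets the framework describe things in the world as readily as documents on the web.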
The totalising rhetoric of the Semantic Web project was very evident in one of its key predecessors, the CYC Corporation’s proprietary ‘common sense’ knowledgebase. This ‘big AI’ project was Guha’s first job out of university, and involved the collation of a huge database of so-called ‘common sense’ statements. These statements were machine-readable, so that software agents would be able to search through them and make inferences based on them. A typical example is a CYC-based search engine that could respond to the question ‘what is bravery?’ by looking through its knowledgebase, finding an assertion that a property of ‘brave’ is ‘danger’, finding another saying that rock climbing is dangerous, and then retrieving a picture of a rock climber.
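The chain of inference in that example is simple enough to sketch in a few lines of Python; the toy knowledgebase below is a hypothetical reconstruction for illustration, not CYC’s actual format:

```python
# A toy 'common sense' knowledgebase as (subject, relation, object) assertions.
assertions = [
    ("brave", "implies", "danger"),
    ("rock_climbing", "involves", "danger"),
    ("rock_climber.jpg", "depicts", "rock_climbing"),
]

def related(term):
    """Find everything linked to a term, in either direction."""
    for subject, relation, obj in assertions:
        if subject == term:
            yield obj
        elif obj == term:
            yield subject

# 'what is bravery?' -> brave -> danger -> rock_climbing -> a picture
step1 = set(related("brave"))                     # {'danger'}
step2 = {t for d in step1 for t in related(d)}    # {'brave', 'rock_climbing'}
step3 = {t for c in step2 for t in related(c) if t.endswith(".jpg")}
print(step3)  # {'rock_climber.jpg'}
```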
The notion of collating all ‘common sense’ (or ‘consensual reality’, as CYC Corporation sometimes put it) as a basis for artificial intelligence is a genuinely totalising and largely discredited idea. This problem, and the fact that the format of the knowledgebase and the modes and methods used to describe its contents were fixed, prescribed by CYC’s designers and their proprietary legal structures, frustrated Guha and gave him and his collaborators the impetus to break with CYC and attempt to formulate a more malleable framework. The development of the Semantic Web – a machine-readable representation of everything, and of its relationship to everything else – does sound like a step towards the kind of universal dataveillance exemplified by DARPA’s discredited ‘Total Information Awareness’ program. It is true that the enriched and extensible vocabularies that the Semantic Web uses to describe relationships will hasten morally dubious activities such as surveillance, unsolicited direct marketing and military operations. These technologies will refine existing authoritarian systems for associating and describing things and people (such as consumer profiling systems) which are usually imposed without negotiation or consent and which, since they are limited to a definition of the person as a consumer, remain very unsophisticated.
However, the extensibility of the Semantic Web, the fact that the person doing the describing can define the terms, the ‘vocabulary’, of that description, suggests a less totalising, more heterogeneous ‘information awareness’. This is both promising and potentially dangerous. Augmented by many more layers of information and description voluntarily supplied by the person being represented, the ‘consumer profile’ becomes infinitely more insidious and detailed. At the same time, the greater sophistication of the Semantic Web’s descriptive language enables someone to consciously and deliberately allow or deny access to specific data that they produce. Using cryptography and ‘friend of a friend’ testimonial systems (sometimes called ‘trust’ networks) at least offers some degree of control over, and awareness of, the data being exchanged about us. On a more structural level, the development of many divergent, even antagonistic descriptions of the world and the people in it moves away from the idea of any imposed ‘consensual reality’ and suggests a mode of representation that can be multiply subjective.
RDF was developed as an open framework, growing out of the W3C’s philosophical inquiries into the creation of universal categorising systems, with the understanding that no such framework can ever be comprehensive; hence the ability to add to and modify the vocabularies used to describe and categorise things.
The Semantic Web’s use of the RDF common framework allows the data used in each description to be fully distributed in terms of storage and authorship. Not only can groups collate and share their own data, they can also automate the aggregation and inclusion of publicly accessible data sources such as company profits, IMF trade data, and the names of and connections between regulatory board members.
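As a sketch of what that aggregation amounts to in practice (in Python with rdflib; the source URLs are placeholders for independently published RDF files):

```python
from rdflib import Graph

g = Graph()

# Each source is authored and hosted independently; aggregation is just
# parsing them all into one graph, since they share the RDF data model.
for source in [
    "http://example.org/company-profits.rdf",    # placeholder URLs
    "http://example.org/regulatory-boards.rdf",
]:
    g.parse(source, format="xml")

# Queries and inferences now run over the combined, distributed data.
print(len(g), "statements aggregated")
```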
RDF’s more widely-known derivative is Rich Site Summary (RSS), a format often used to syndicate news stories and blog postings between websites. Both RDF and RSS are machine-readable web standards for expressing metadata (data about data), but whereas RSS has a predetermined and fixed vocabulary specifically for reading news, RDF is an extensible common framework for vocabularies and their namespaces. Using the framework of RDF you can create an ordered list of terms about a category of things (a namespace). For example, the FoafCorp namespace, which came about as a vocabulary to convert the They Rule [http://theyrule.net] project into a Semantic Web-compatible format, started with the original vocabulary below:

* fc (foaf corp)
* fc: Company
* fc: Committee
* fc: Board
* fc: Member
* fc: Stock code
* fc: Filings

Later, at the Cartographic Congress held in London’s Limehouse Town Hall in June 2003 (see Mute 26), the MCC (Mapping Contemporary Capitalism) project proposed the following additions and extensions:

* fc: Owns – internal, external
* fc: Shareholders – list of shareholders, number of shares on each market, percentage of shares
* fc: Company employs – (this is a crude category which will display multiple categories: business management, investment banking, marketing, personnel etc.)
* fc: Company is funding – (this data may be unavailable but we can draw many inferences from its patchiness)
* fc: Company affiliation – company member affiliation (e.g. Gates Foundation)
* fc: Company’s geographical locations
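What a description using this vocabulary might look like can be sketched as follows; the namespace URL, property spellings and company data here are invented for illustration (the actual FoafCorp namespace is linked in the glossary below):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Invented namespace URL and property names, loosely following the fc list above.
FC = Namespace("http://example.org/foafcorp/0.1/")

g = Graph()
acme = URIRef("http://example.org/companies/acme")

g.add((acme, RDF.type, FC.Company))
g.add((acme, FC.stockCode, Literal("ACME")))

# Board membership is expressed as links between resources, so the same
# person node can recur on the boards of several companies.
board = URIRef("http://example.org/companies/acme/board")
g.add((acme, FC.board, board))
g.add((board, FC.member, URIRef("http://example.org/people/j-doe")))

print(g.serialize(format="xml"))
```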
ONTOLOGIES

Once web content has been formatted using an RDF vocabulary from a namespace, such as FoafCorp, it becomes possible to infer meaning from the associations between the things it describes. To make those inferences, the Semantic Web uses the Web Ontology Language (OWL), a language for expressing the logical relationships between the terms of a vocabulary, with which agents can ask logical questions about the assertions in RDF documents.
‘An ontology defines the terms used to describe and represent an area of knowledge. Ontologies are used by people, databases, and applications that need to share domain information (a domain is just a specific subject area or area of knowledge, like medicine, tool manufacturing, real estate, automobile repair, financial management, etc.). Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them (note that here and throughout this document, definition is not used in the technical sense understood by logicians). They encode knowledge in a domain and also knowledge that spans domains. In this way, they make that knowledge reusable.’ – W3C Working Draft, 3 February 2003, [http://www.w3.org/TR/2003/WD-webont-req-20030203/#onto-def]
A set of OWL ontology code might include a namespace, an initial set of URLs to visit, and a number of logical declarations to apply. Because the Semantic Web deals with web content, it is inherently distributed, so OWL ontologies can be expected to be distributed too. One consequence of this is that OWL generally makes an ‘open world assumption’: it never treats its current knowledge as complete, allowing an agent to move across networks, find new bits of RDF metadata, new assertions and new questions, and add them to the initial ontology.
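A sketch of what such a declaration and a single inference step might look like (Python with rdflib; the fc terms continue the invented examples above, and since rdflib stores assertions but does not itself reason over them, the inference is walked by hand):

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

FC = Namespace("http://example.org/foafcorp/0.1/")  # invented, as above

g = Graph()
g.add((URIRef("http://example.org/foafcorp/ontology"), RDF.type, OWL.Ontology))

# A taxonomic constraint: every board member is also a company member.
g.add((FC.BoardMember, RDFS.subClassOf, FC.Member))
g.add((URIRef("http://example.org/people/j-doe"), RDF.type, FC.BoardMember))

# One hand-rolled inference step: anything typed with a subclass is also
# typed with its superclass. Under the open world assumption, newly
# discovered assertions would simply be added and the step re-run.
for cls, _, supercls in g.triples((None, RDFS.subClassOf, None)):
    for person, _, _ in g.triples((None, RDF.type, cls)):
        g.add((person, RDF.type, supercls))

# j-doe is now also an fc:Member.
print((URIRef("http://example.org/people/j-doe"), RDF.type, FC.Member) in g)
```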
OWL would be employed in the form of a ‘bot, spider or scutter’, a set of code sent out onto the web to gather and interpret RDF data. For example, technology writer Edd Dumbill’s ‘FOAFbot’ sits on an IRC channel, listening for snippets of the conversation that it is programmed to understand:

<edd> foafbot, edd’s name
<foafbot> edd’s name is ‘Edd Dumbill’, according to Dan Brickley, Anon35, Niel Bornstein, Jo Walsh, Dave Beckett, Edd Dumbill, Matt Biddulph, Paul Ford

The FOAFbot is invoked when Edd calls its name in the IRC channel, and then responds to his command ‘edd’s name’ by searching through the statements in the FOAF files of Dan Brickley, Anon35, Niel Bornstein, etc., and inferring from these that the nickname ‘edd’ refers to ‘Edd Dumbill’. It can retrieve any information about Edd that is available in the statements in those FOAF files, such as links to pictures of ‘Edd’ or lists of the people that Edd says he knows in his FOAF file.[5] This simple functionality can then be re-used by other bots, built on and re-purposed to create hugely complex and nuanced systems of distributed information storage and retrieval.
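A heavily simplified sketch of that kind of lookup, in Python with rdflib; this is not Edd Dumbill’s actual FOAFbot (whose source is linked in footnote [5]), and the FOAF file URLs are placeholders:

```python
from rdflib import Graph, Literal, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# FOAF files published independently by different people (placeholder URLs).
g = Graph()
for url in [
    "http://example.org/danbri/foaf.rdf",
    "http://example.org/jo/foaf.rdf",
]:
    g.parse(url, format="xml")

def names_for_nick(nick):
    """Infer a person's name from an IRC nick, FOAFbot-style: find resources
    with the given foaf:nick, then read off their foaf:name statements."""
    for person in g.subjects(FOAF.nick, Literal(nick)):
        for name in g.objects(person, FOAF.name):
            yield str(name)

print(list(names_for_nick("edd")))  # e.g. ['Edd Dumbill']
```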
FREE ASSOCIATION
Prior to the invention of the printing press, there was no such thing as an index. Books copied by hand would have different pagination, so the idea of correlating specific sections of a book with particular ideas, and collating them in an index at the back, never arose. Similarly, without standardised grammar, spelling or spacing between words, hand-written script tended to run into long, unbroken lines of letters that needed to be read out and understood aurally for meaning to emerge. The visual comprehension of words on a page, without a spoken and heard intermediate stage, was again a development of the printing press. These two developments made it possible to access and use information with formerly unimaginable speed and sophistication. The Semantic Web promises a similar acceleration and transformation in our relationship with information. The vision of computers and people working in ‘co-operation’, as Berners-Lee puts it, casts aside superficial metaphors of ‘pages’ to be ‘explored’ or ‘navigated’ and instead proposes the web as a growing network of prosthetic comprehension and, potentially, a treacherous one.
‘The third wave of network attacks is semantic attacks: attacks that target the way we, as humans, assign meaning to content.’ – Bruce Schneier, ‘Semantic Attacks: The Third Wave of Network Attacks’, Crypto-Gram Newsletter, October 2000[6]
Although here Bruce Schneier is talking about the imminent threat of a catastrophic hacker assault on computer security systems, he could just as well be referring to the standard operation of certain search engines. Although Google currently maintains a fairly clean track record with regard to how it indexes, ranks and displays its search results, the potential for massaging and manipulating those operations is huge. Dependence on a single system of information association, particularly an unaccountable commercial system whose ownership may change at any moment, makes our use of the web very vulnerable to abuse. The enclosure of a potential ‘information commons’ by an anarchistic elite of corporate/state bodies is well underway. Alongside this enclosure, strong and vibrant hobbyist movements are flourishing. Free software activists, free hardware geeks and free networkers – natives of the information commons – are continuing to fiddle, peeking under the bonnet of their technologies, creating and manipulating their information environment as they see fit. Despite the problematic heritage of the Semantic Web project, it still has the potential to be used and developed into an important element of the Free Information movement. The three strands of this movement mentioned above share the SemWeb’s dubious origins, but are pursuing a difficult and tortuous course that avoids compliance with authoritarian and profit-driven exploitation. As it is, these movements are disparate and unconnected, resembling the state of the net itself: an incoherent mess of networks. Worse, the connections between these networks are almost always proprietary at some point. When you download free software, it will almost certainly be passing over a proprietary network and, somewhere in that transaction, there is a dependency on the permission and profit margin of a corporation, a media owner, an ISP, the DNS system. You might not even have found out about the software if Google hadn’t permitted it to be indexed and returned in your search results.
Without the associations and indices that allow access to information, that information is inaccessible and valueless. As the density and quality of Semantic Web meta-content grows, that meta-content will become an extremely valuable asset in itself. To protect the integrity and trustworthiness of their meta-content, Semantic Web developers and meta-content producers will need to cooperate with the Free Software groups and adopt similar legal defence strategies, asserting the intellectual property rights of an author in order to allow their works to be maintained in the public domain.
But here is the most treacherous part. Asserting intellectual property rights over associations, vocabularies, descriptions and the relationships between things in the world, as much as over data on the web, is premised on the assumption that this kind of information must be seen as property. As the Semantic Web stretches over more and more areas of knowledge production, encompassing histories, identities, interpersonal relationships and language, this assumption feeds the nauseous system of self-industrialisation and commodification: the process by which we are transforming ourselves into the Freemasons of the Future.
Glossary and Links

Semantic Web: The web of data, with meaning in the sense that a computer program can learn enough about what the data means to process it.

RDF – Resource Description Framework: Designed for expressing metadata about things in the form of ‘triples’, using vocabularies that are published on the web. See the Mute Map vocabularies (above). An introductory (business-oriented) slideshow by Tim Berners-Lee has some interesting visualisations and talks about using an ‘RDF Integration Bus’ like the Mute Map Infomesh for applications: [http://www.w3.org/2003/Talks/03-pcforum-tbl/slide15-4.html ]
W3C RDF primer: [http://www.w3.org/TR/rdf-primer/ ]
History of RDF by Tim Bray: [http://www.tbray.org/ongoing/When/200x/2003/05/21/RDFNet ]

RSS – Rich Site Summary: An RDF vocabulary and RDF/XML format for distributing news, increasingly popular with websites. There are many newsreaders available, for example [http://amphetadesk.com ] for Windows and [http://www.netnewswire.com ] for Macs. There are also many RSS aggregation services, like [http://syndic8.com ]. Easy-to-write ‘crawlers’ and ‘scrapers’ can convert HTML, email, irc, nntp etc. to RSS format.

FOAF – FriendOfAFriend: A vocabulary for describing people and networks of people in RDF: [http://rdfweb.org/foaf ]
Friends of Corporate Friends (FOAFCorp): [http://rdfweb.org/foaf/corp ]
FoafNaut: A visual tool for navigating the FOAF network, done in SVG: [http://foafnaut.org ]
A bug in FOAF, explaining the difficulties of modelling groups of people: [http://rdfweb.org/issues/show_bug.cgi?id=8 ]

OWL – Web Ontology Language: A language (expressed in RDF) that allows us to apply logical and taxonomic constraints to RDF data and the things expressed in RDF vocabularies. Still in development: [http://www.w3.org/TR/2002/WD-owl-guide-20021104/ ]

SVG – Scalable Vector Graphics: An XML format for describing vector graphics with SMIL and javascript. It can do Flash-like things; it also does lovely scalable static images.
An SVG organisational chart demo: [http://swordfish.rdfweb.org/discovery/2003/03/6deg.svg ]
Carto.net, cartography and SVG: [http://www.carto.net ]
SVG London tube map: [http://space.frot.org/rdf/tubemap.svg ]

XML – Extensible Markup Language: A simplified successor to SGML, W3C’s generic language for creating new markup languages. Markup languages (such as HTML) are used to represent documents with a nested, tree-like structure. XML is a product of W3C and a trademark of MIT.

Scutter, spider, bot: In the Semantic Web context, a set of code containing logical instructions that is sent to a number of URIs to apply the code to the RDF data it finds at these addresses.

Namespace: A repository for Semantic Web vocabulary.

URI – Uniform Resource Identifier: The generic set of all names/addresses that are short strings referring to resources.

URL – Uniform Resource Locator: An informal term (no longer used in technical specifications) associated with popular URI schemes: http, ftp, mailto, etc.

W3C – World Wide Web Consortium: A neutral meeting point for those to whom the web is important, with the mission of leading the web to its full potential.

If you are interested in taking part in Semantic Web or cartography projects, you are welcome to join the University of Openess Faculty of Cartography: [http://uo.theps.net/FacultyCartography ], or find more information at one of the key resources listed below:

RDFweb and FOAF development: [http://rdfweb.org/ ]
Geowanking – an important mapping list: [http://lists.burri.to/mailman/listinfo/geowanking ]
TheMuteMap – Semantic Web/SVG development space: [http://themutemap.3d.openmute.org ] and Mapping Contemporary Capitalism: [http://themutemap.3d.openmute.org/modules/wakka/McC ]
The Locative Media Lab: [http://locative.org/ ]
Footnotes
[1] The Quantum Time Bomb is an expression which refers to the whole range of anomalies which will occur when a quantum computer is linked to the internet. Perhaps this has already happened on August 14th 2003, when much of North America experienced a power cut.
[2] For more information about this author, see [http://www.wikipedia.org/wiki/User:Harry_Potter ]
[3] On Friday 22nd August 2003, the Voodoo Science Club and the London Psychogeographical Society Historification Committee announced an overnight cycle trip from Limehouse, London, to the Cave of the Illuminati, Royston, Herts: [http://uo.theps.net/LimehouseTimeTravelRally ]
[4] Tim Berners-Lee, James Hendler and Ora Lassila, ‘The Semantic Web’, Scientific American, May 2001
[5] Download source code and find more information about Edd Dumbill’s FOAFbot at [http://usefulinc.com/foaf/foafbot/ ]