A Computational Perspective

With the emergence of the so-called Linked Data Web, or Semantic Web, a key challenge arises: as we move from a Web of documents to a Web of data linked at a much finer grain, how are we to browse, explore and query such a Web at scale?
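To make the shift in granularity concrete, consider a toy triple store: on the Linked Data Web, the unit of information is not a document but an individual (subject, predicate, object) statement, and queries are patterns over those statements. The sketch below is illustrative only; the data, prefixes and `match` helper are invented for this example, standing in for a real RDF store queried with SPARQL.

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple,
# mirroring how linked data decomposes documents into fine-grained statements.
# All identifiers here (ex:alice, foaf:knows, ...) are hypothetical examples.
triples = {
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", "Bob"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Query: the names of everyone Alice knows -- a two-step graph pattern,
# analogous to a SPARQL basic graph pattern joined on a shared variable.
friends = [o for (_, _, o) in match("ex:alice", "foaf:knows")]
names = [o for f in friends for (_, _, o) in match(f, "foaf:name")]
print(names)  # ['Bob']
```

The open question posed above is what happens when `triples` holds billions of statements spread across millions of servers rather than three tuples in memory.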

Collective intelligence is the surprising result that collaborative endeavour, governed by only light rules of social coordination, can lead to the emergence of large-scale, coherent resources such as Wikipedia. What are the characteristics of such resources? Why do people contribute, and how do they maintain a highly stable core body of connected content?

How do we support inference at Web scale? What types of reasoning are possible? How is context represented and supported in Web inference?
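One simple form of the inference alluded to here is forward chaining: repeatedly applying a rule to a set of facts until no new facts emerge. The sketch below computes the transitive closure of a `subClassOf`-style relation, in the spirit of RDFS subclass reasoning; the class names and plain-tuple encoding are illustrative assumptions, and the hard question above is how such fixpoint computations behave over Web-scale, distributed, inconsistent data.

```python
# Minimal forward chaining: apply the transitivity rule
#   (A subClassOf B) and (B subClassOf C)  =>  (A subClassOf C)
# until a fixpoint is reached. Facts and names are hypothetical.
facts = {
    ("Cat", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}

def closure(facts):
    """Return the facts closed under the transitivity rule."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        derived = {(a, "subClassOf", c)
                   for (a, p1, b) in facts if p1 == "subClassOf"
                   for (b2, p2, c) in facts if p2 == "subClassOf" and b2 == b}
        if not derived <= facts:  # any genuinely new facts?
            facts |= derived
            changed = True
    return facts

inferred = closure(facts)
print(("Cat", "subClassOf", "Animal") in inferred)  # True
```

Even this naive loop is quadratic per pass; scaling it to billions of triples is precisely the research challenge the question raises.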

How are concepts such as trust and provenance computationally represented, maintained and repaired on the Web?
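One candidate answer to the representation half of this question is to make provenance a first-class object: each statement carries metadata about who asserted it and when, and trust becomes something computed over that metadata rather than assumed. The class, the trust table and its scores below are all invented for illustration, a minimal sketch rather than any standard model.

```python
# A hypothetical provenance wrapper: a statement plus metadata about
# its origin, so trust decisions can be computed from the record.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    statement: tuple   # (subject, predicate, object)
    source: str        # who published the statement
    retrieved: str     # ISO date the statement was fetched

# Illustrative trust scores for sources; real systems would have to
# maintain and repair these values as the Web changes.
trust = {"trusted.example.org": 0.9, "unknown.example.net": 0.2}

def trusted(a: Assertion, threshold: float = 0.5) -> bool:
    """Accept a statement only if its source clears a trust threshold."""
    return trust.get(a.source, 0.0) >= threshold

good = Assertion(("ex:alice", "foaf:knows", "ex:bob"),
                 "trusted.example.org", "2024-01-01")
dubious = Assertion(("ex:alice", "foaf:knows", "ex:mallory"),
                    "unknown.example.net", "2024-01-01")
print(trusted(good), trusted(dubious))  # True False
```

The "maintained and repaired" half of the question is what this sketch leaves open: where the scores in `trust` come from, and how they are revised when sources change or conflict.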

As the Web has grown, substantial amounts of it have become disconnected, atrophied or in other ways redundant. How are we to identify such necrotic and non-functional parts of the Web, and what should be done about them?
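A first, crude step towards identifying necrotic content is classifying individual links by their observed health. The function below is a hypothetical classifier: in practice the status code and age would come from an HTTP request and response headers, and both the categories and the thresholds (five years for "stale", treating persistent 5xx responses as dead) are assumptions made for the sketch.

```python
# A hypothetical link-health classifier. Inputs stand in for what a
# crawler would observe via an HTTP HEAD request; thresholds are
# illustrative, not established values.
def link_health(status_code: int, days_since_modified: int) -> str:
    if status_code in (404, 410) or status_code >= 500:
        return "dead"    # gone, or the server persistently failing
    if 300 <= status_code < 400:
        return "moved"   # content relocated; inbound links need repair
    if days_since_modified > 365 * 5:
        return "stale"   # reachable but long untouched
    return "live"

print(link_health(404, 10))    # dead
print(link_health(301, 0))     # moved
print(link_health(200, 3000))  # stale
```

The deeper question the paragraph raises remains: once links are classified, who is responsible for repairing, archiving or retiring the necrotic parts they point to?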