Category Archives: Rensselaer Polytechnic Institute

Real-time Twitter Visualisations for the US 2016 Presidential Elections

Twitter Visualisation at RPI
For the 2016 US Presidential election, researchers at the University of Southampton, with support from the EPSRC-funded project SOCIAM, built a real-time data visualization that combined traditional polling data with social media posts. The application was built and designed for the Rensselaer Polytechnic Institute EMPAC Campfire, a novel multi-user, collaborative, immersive computing interface consisting of a desk-height panoramic screen and floor projection that users gather around and look into. The application is also part of the Web Macroscope (a visualization platform developed at the University of Southampton) and uses data from the Southampton Web Observatory.

Polling data was taken from the Huffington Post Pollster API, which aggregates the popular polls and their results. The social media data was collected from Twitter, using both the Streaming and Search APIs. The Streaming API was used to create a stream containing 1% of all tweets that included any of the popular and official hashtags and words used by each campaign to show support for its candidate. This list included tags like ‘TeamTrump’, ‘maga’, and ‘draintheswamp’ in support of Donald Trump, and ‘LoveTrumpsHate’, ‘ImWithHer’, ‘StrongerTogether’, and ‘WhyIWantHillary’ in support of Hillary Clinton. Any tweets that mixed hashtags and words from both candidates were removed, as this was normally done not to show support for a candidate but to react to supporters on the other side.
Campfire visualisation of US election Twitter activity
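As a concrete illustration of the filtering step described above, here is a minimal Python sketch (our own illustration, not the project's actual code; the function and variable names are ours) that classifies a tweet's text against the two hashtag lists and discards tweets that match both sides:

    # Minimal sketch of the hashtag/keyword filtering described above.
    # The tag lists are those named in the post; the function itself is
    # illustrative and not the project's actual code.
    TRUMP_TAGS = {"teamtrump", "maga", "draintheswamp"}
    CLINTON_TAGS = {"lovetrumpshate", "imwithher", "strongertogether", "whyiwanthillary"}

    def classify_tweet(text):
        """Return 'trump', 'clinton', or None (no match, or mixed tags)."""
        tokens = {token.lstrip("#").lower() for token in text.split()}
        pro_trump = bool(tokens & TRUMP_TAGS)
        pro_clinton = bool(tokens & CLINTON_TAGS)
        if pro_trump and pro_clinton:
            return None  # mixed tags: usually a reaction, not support, so dropped
        if pro_trump:
            return "trump"
        if pro_clinton:
            return "clinton"
        return None

    # classify_tweet("Early voting done! #ImWithHer")         -> 'clinton'
    # classify_tweet("#MAGA crowd vs #ImWithHer crowd today")  -> None (removed)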
Results from the visualization showed different levels of support on Twitter for each candidate over time. In the days leading up to the election on November 8th, tweets in support of Trump outnumbered those in support of Clinton by a factor of 1.5. Interestingly, on the day of the election this ratio narrowed and levelled off: around 2pm EST on November 8th, tweets in support of Clinton were almost equal in number to those supporting Trump. Later on election night the ratio shifted again, with tweets in support of Trump 1.14 times more numerous than those in support of Clinton.
Another interesting result from the data was that tweets with geographic information attached were overwhelmingly in support of Clinton, both in the days leading up to the election and on election day itself. Most tweets streamed into the visualization had no GPS lat/long data embedded in them; geotagged tweets typically come from mobile phones running the Twitter app with the optional GPS location setting enabled. As a whole, these geotagged tweets are a small minority of the data collected from the Twitter stream (about 1%). Interestingly, they supported Clinton 15 times more than Trump. Why this is the case is hard to say; it may simply be that Clinton supporters were more likely than Trump supporters to use mobile apps with location sharing enabled.
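To make the 15-to-1 geotag comparison concrete, a counting sketch along the following lines (again our own illustration, assuming tweets arrive as dicts in Twitter's classic JSON format, where geotagged tweets carry a non-null 'coordinates' field) would reuse the classifier shown earlier:

    # Illustrative only: count geotagged tweets per candidate and report the ratio.
    from collections import Counter

    def geo_support_ratio(tweets):
        """tweets: iterable of dicts in Twitter's classic JSON format."""
        counts = Counter()
        for tweet in tweets:
            if tweet.get("coordinates"):  # keep only GPS-tagged tweets
                label = classify_tweet(tweet.get("text", ""))
                if label:
                    counts[label] += 1
        if counts["trump"] == 0:
            return None  # avoid division by zero on small samples
        return counts["clinton"] / counts["trump"]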
Two other studies – one from researchers at USC, and another from Oxford University, the University of Washington and Corvinus University of Budapest – both showed that AI-controlled bots were spreading pro-Trump content in overwhelming numbers. This created the illusion of more support for Trump on Twitter than may naturally have existed. Our finding that geotagged tweets favoured Clinton, despite overall Twitter support favouring Trump, might be explained by this bot activity.
Authored by Dominic DiFranzo, 18 November 2016.

ACM Web Science 2017 at Rensselaer Polytechnic Institute, Troy NY

Rensselaer Polytechnic Institute. Photo: Wesley Fryer ©2016, CC BY 2.0 (https://www.flickr.com/photos/wfryer/22392839289/; https://creativecommons.org/licenses/by/2.0/)

WebSci17 is taking place at Rensselaer Polytechnic Institute (RPI) in Troy, New York, co-chaired by Professor Deborah L McGuinness (Tetherless World Senior Constellation Chair and Professor of Computer and Cognitive Science at RPI) and Professor Peter Fox (Tetherless World Constellation Chair and Professor of Earth and Environmental Science, Computer Science and Cognitive Science at RPI). Program Chairs are Dr Katharina Kinder-Kurlanda (GESIS) and Professor Paolo Boldi (Univ Milano).

Save the dates!

  • Notify intention to submit: 1 March 2017
  • Submit papers: 8 March 2017
  • Submit extended abstracts: 1 May 2017
  • Conference: 26 – 28 June 2017
  • Workshops: 25 June 2017

The Science of Magic

Troy, N.Y. – An interdisciplinary team of researchers at Rensselaer Polytechnic Institute is collaborating with Walt Disney Imagineering Research & Development, Inc., part of the theme park design and development arm of The Walt Disney Company. Together, they are exploring how the cognitive computing technology being developed at Rensselaer can help enhance the experience of visitors to Disney theme parks, cruise ships and other venues.

Walt Disney Imagineering Research & Development, Inc. and Rensselaer researchers are exploring a range of cognitive computing technologies. These include information extraction techniques to help computers better understand words written or spoken by a human, as well as agent-based techniques for investigating how computers and humans can engage in more natural conversations.

“Walt Disney Imagineering Research & Development, Inc. is part of the creative force behind the iconic Disney attractions and experiences and is at the forefront of natural interactive character-based experience technologies. Walt Disney Imagineering Research & Development, Inc. has a rich history of creating, developing and bringing to life groundbreaking technologies in the field of Audio-Animatronics® Figures. Rensselaer is a world-class research university and a leading force in computational science and engineering, including in the emerging field of cognitive computing. The possibilities of what we can accomplish together are endless,” said Jonathan Dordick, vice president for research at Rensselaer.

“Walt Disney Imagineering Research & Development, Inc. is excited to partner with Rensselaer, a recognized leader in knowledge extraction and natural language understanding. We believe Rensselaer’s world-class text and language processing tools, in conjunction with Walt Disney Imagineering Research & Development, Inc.’s cutting-edge autonomous character platforms, will enable a new class of Guest/character experiences,” said Jonathan Snoddy, R&D Studio Executive at Walt Disney Imagineering Research & Development, Inc.

Leading the project for Rensselaer is James Hendler, Tetherless World Senior Constellation Professor and director of The Rensselaer Institute for Data Exploration and Applications (IDEA). An expert in web science, Big Data, and artificial intelligence, Hendler said the collaboration with Walt Disney Imagineering Research & Development, Inc. is an important step forward for all of the data-related research taking place as part of The Rensselaer IDEA. Rensselaer faculty members Mei Si, assistant professor in the Department of Cognitive Science, and Heng Ji, the Edward P. Hamilton Development Chair and associate professor in the Department of Computer Science, will collaborate with Hendler on the project.

“Unstructured data, that is the information inherent in written texts and spoken dialog, is an increasingly important part of the Big Data landscape,” Hendler said. “Our goal in this project is to work with Walt Disney Imagineering Research & Development, Inc. to transform the leading-edge tools and techniques into fully developed applications that will help make the Disney experience even more enjoyable for people and families around the world. We look forward to an incredible collaboration with Walt Disney Imagineering Research & Development, Inc.”

Contact: David Brond, Rensselaer Polytechnic Institute
Office: (518) 276-2800
Email: brondd@rpi.edu

See more at: http://news.rpi.edu/content/2015/03/19/science-magic-rensselaer-and-walt-disney-collaborate

Jim Hendler on Social Media and Collective Intelligence

Interview with Jim Hendler on Social Media and Collective Intelligence

(This interview was first published online on 12 December 2012 in the German Journal on Artificial Intelligence.)

Jim Hendler
James A. Hendler is the Tetherless World Professor of Computer and Cognitive Science and the Head of the Computer Science Department at RPI. He also serves as a Director of the UK’s charitable Web Science Trust and is a visiting Professor at De Montfort University in Leicester, UK. Hendler has authored about 200 technical papers in the areas of the Semantic Web, artificial intelligence, agent-based computing and high performance processing. One of the early “Semantic Web” innovators, Hendler is a Fellow of the American Association for Artificial Intelligence, the British Computer Society, the IEEE and the AAAS. In 2010, Hendler was named one of the 20 most innovative professors in America by Playboy magazine and was selected as an “Internet Web Expert” by the US government. He is the Editor-in-Chief emeritus of IEEE Intelligent Systems and is the first computer scientist to serve on the Board of Reviewing Editors for Science. Hendler was the recipient of a 2012 Strata Conference Data Innovation Award for his work in Open Government Data.

KI: We see a very broad interpretation of Social Media these days. How do you define Social Media?
In the real world people form communities of various kinds that are bound together by social relationships—those relationships are the real-world “social networks” that social-networking sites rely on. Personally, I see social media as the online means for realizing, extending, maintaining, etc. those networks through new computing technologies.
KI: Regarding this definition, how would you put Collective Intelligence in that context?
Continuing that analogy, collective intelligence is what arises as the online communities grow and become “utile.” This can result in systems like Wikipedia, where a combination of openness and governance, coupled with expertise and interest has produced something exciting. On the other hand, without tools and rules it would digress into a hopeless mess. One of the things I’m excited about is improving the tools both for helping to produce, and also to better understand, collective intelligence in many more systems.
KI: Please tell us your personal assessment of these fields of research. What are your observations on the changes and trends these fields have gone through?
I’m excited by much of the work I see, but frankly I think we have a long way to go. To date, a lot of the work has been looking at well-studied phenomena in off-line communities and seeing how those effects play out in online communities. That is great work, but one starts to see hints of new things happening as these communities are growing, and as a new generation that texts as naturally as it talks comes online. We see an entire generation which has less binding to physical location, has different ideas about privacy and social mores, and which will inherit the societies we all live in. Understanding the differences in online communities, rather than the commonalities, and looking at how these things help shape the real world, more than the other way around, is a change that is just starting.
KI: What are future research challenges in this area?
Lots and lots, but I’d start with inter-disciplinarity as the key challenge to researchers. I recently was at a large faculty gathering where computer scientists, digital humanities researchers, and social scientists, all managed to use the same words to talk past each other. Further, there was surprisingly little awareness of others’ literatures. Social scientists who study online communities were surprised to learn about “human computing” (the work Luis von Ahn and others do), and computer scientists had no clue that incentives, motivation and social affect were well-studied areas of literatures they’d never seen. Worse, in my opinion, was that most of them didn’t even realize they were talking past each other. I think it is great that some of the top social media researchers are getting huge followings, but until these communities can understand each other, I don’t think we’ll really develop the kinds of understanding we need.
KI: Social Media and Collective Intelligence are often mentioned in context of the fourth paradigm (data-intensive scientific discovery). What is your opinion regarding this?
I think that the tools needed for large-scale data analysis are helpful in enabling a new generation of social scientists (and many others) to explore issues at scale that could previously only be looked at in microcosms. However, that alone isn’t enough—the key is figuring out how to ask the right questions, build the right tools, etc.
KI: Also closely connected is the topic of Big Data. What challenges and opportunities do you see in the application of Big Data in the context of Social Media and Collective Intelligence?
My colleague Nigel Shadbolt recently summed up one of my talks in a tweet that said “ ‘Big Data’ is so 2011—enter the era of ‘Broad Data.’ ” To me the real challenge isn’t data mining and data analytics, it’s finding ways to collect and understand the important data that explains what is happening. Facebook graphs and Twitter re-tweet networks can be analyzed to show us what is happening online, but not yet to show us how these effects are really changing the world. The Arab Spring, the SOPA “Internet blackout,” and the way social media are being manipulated for political gain, as a few examples, are the kind of things we must understand. I just don’t see ‘Big Data’, in and of itself, getting us there.
KI: R&D and Innovation has already opened itself to include customers and other stakeholders outside of the company. What is your opinion on the inclusion of Collective Intelligence in the Innovation process as the next step?
I think it is currently too much of a fad. Until we understand it a lot better, we’ll be unable to tell the “wisdom of the crowd” from the “madness of the mob”.
KI: Before you mentioned “tools and rules,” can you elaborate?
If we look at Wikipedia, we see that, as it has grown, it has needed to evolve the rules that govern the collective. Further, the Wikipedia infrastructure uses a bunch of different tools including data miners, software robots, and history visualizers to make those rules effective. The MediaWiki code is open, but the rules are not in machine-readable forms and the tools are not available. The same applies to many of the other large-scale social media platforms and applications. I think a great challenge to technologists is how to make these things declarative and available to larger audiences.
KI: There is some initial research on semantics based on Social Media. What are the opportunities you see connecting these two areas?
Here are a few ideas: Making tools and rules more available needs to build on AI and Semantic Web technologies. Current technologies have to evolve into online social machines that integrate human and computer capabilities. (Tim Berners-Lee and I had a paper in AI Journal on this a couple of years ago.) Another way semantics can play into this is by helping us move from unlabeled network models to more complex graphs that can give us more specific capabilities for analyzing the important aspects of online communities I referred to earlier. The move by Facebook to bring more relationships into their Open Graph protocol (OGP) shows that they see the need for more of this. As academics we should also be exploring this idea. Additionally, we need to be able to create some semantic components that let us make inferences that can, for example, bridge networks by recognizing common people and elements, that can let us recognize categories of terms and topics, and that can otherwise help us do more sophisticated analyses on these ever more complex emerging graphs.
KI: Do you think future intelligent agents could be using Social Media to further develop based on the wisdom of the crowd?
Actually, I take a very different spin on this question. To me the real exciting thing is when we can put more and more capable agents into the hands of people working together to solve really hard problems. The ability to help collective intelligence be applied to real-world problems and to increase its reach requires the tools I mentioned before. And the tools we need are agents that can support the development and evolution of social machines.
KI: How does the curriculum of Web Science differ compared to that of i-Schools and IS?
While there’s obviously overlap between these areas, the emphases of the Web Science work at different schools, and in different departments, will vary. At some schools I would expect the i-Schools to be leaders in this area, at others it may be a computer science department, IT program or communications program. Some places, like RPI in the US, Southampton in the UK, and KAIST in Korea are starting to give degrees that include Web Science, generally in interdisciplinary programs. Given the inherently multidisciplinary nature of the pursuit, I think there is room under many departments for the work. I like the analogy of “climate science:” One doesn’t expect to find a climate science department at most places, rather, different schools have different areas of expertise that feed into the overall study of the climate. If we analogize the Web to the computing “climate,” we see that one must understand the underlying principles, the engineering involved, and the social effects. All of these things impact each other in massively complex ways, and our goal is to start getting a more principled understanding of what is going on!
KI: Thank you very much for this interview!