Posted on

Last week I gave another talk in our Weekly AI meeting on the topic of ControCurator. This is a project I am currently working on, which aims to enable the discovery and understanding of controversial issues and events by bringing together humans and machines in active learning workflows.

In this talk I went into the different aspects of controversy that we have identified in this project. You can view the slides here:

Posted on

Our ControCurator paper abstract titled “ControCurator: Understanding Controversy Using Collective Intelligence” has been accepted at Collective Intelligence 2017. In this paper we describe the aspects of controversy: time-persistence, emotion, multiple actors, polarity, and openness. Using crowdsourcing, the ControCurator dataset of 31,888 controversy annotations was obtained on the relevance of these aspects to 5,048 Guardian articles. The results indicate that each of these aspects is a positive indicator of controversy, but also that there is a clear difference in their signal strength: most notably, emotion was found to be the strongest indicator, though all measured aspects correlate positively with controversy. These results suggest that the controversy model is accurate and useful for modeling controversy in news articles.
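These per-aspect signals can be checked directly once the released annotations (linked below) are loaded into a table. A minimal sketch, assuming one row per article with aggregated crowd scores; the file and column names here are hypothetical, not the actual schema of the corpus:

```python
# Minimal sketch: correlating crowd-scored controversy aspects with an
# overall controversy score. File and column names are hypothetical,
# not the actual schema of the ControCurator corpus.
import pandas as pd

df = pd.read_csv("controcurator-annotations.csv")  # hypothetical file name

aspects = ["time_persistence", "emotion", "multiple_actors", "polarity", "openness"]
for aspect in aspects:
    r = df[aspect].corr(df["controversy"])  # Pearson correlation by default
    print(f"{aspect}: r = {r:.3f}")
```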

The full dataset with controversy annotations is available for download at https://github.com/ControCurator/controcurator-corpus/releases/tag/1.0

Posted on

Our demo of ControCurator titled “ControCurator: Human-Machine Framework For Identifying Controversy” will be shown at ICT Open 2017. The demo shows the ControCurator human-machine framework for identifying controversy in multimodal data. The goal of ControCurator is to enable modern information access systems to discover and understand controversial topics and events by bringing together crowds and machines in a joint active learning workflow for the creation of adequate training data. This workflow allows a user to identify and understand controversy in ongoing issues, regardless of whether there is existing knowledge on the topic.

Posted on

Yesterday at the Computable Awards, the Vrije Universiteit, the University of Amsterdam and IBM won the prize for “ICT project of the year in education” with the Watson Innovation Course. Furthermore, the project was the highest rated across the nominees of all prize categories. The course is currently running for the second time, with an improved setup and new state-of-the-art tools for the students.

The course is run by Lora Aroyo, Anca Dumitrache, Benjamin Timmermans and Oana Inel from the VU, and Robert-Jan Sips and Zoltan Szlavik from IBM. In the course, the students were challenged by Amsterdam Marketing to address the increasing overcrowding of the Amsterdam city center by tourists. The city is culturally rich with many places to visit, yet most visitors cluster around a limited set of popular locations. The students came up with ideas to motivate visitors to spread out across the city and to provide them with relevant information for their visit.

computable-award

Posted on

Today I gave a talk in our Weekly AI meeting on the topic of ControCurator. This is a project I am currently working on, which aims to enable the discovery and understanding of controversial issues and events by bringing together humans and machines in active learning workflows.

In the talk I explained the issue of defining the space of a controversy, and how this relates to, for instance, wicked problems. You can see the slides below.

Posted on

Brainstem tumors are a rare form of childhood cancer for which there is currently no cure. The Semmy Foundation aims to increase the survival of children with this type of cancer by supporting scientific research. The Center for Advanced Studies at IBM Netherlands is supporting this research by developing a cognitive system that allows doctors and researchers to analyse MRI scans more quickly and to better detect anomalies in the brainstem.

In order to gather training data, a crowdsourcing event was held at Lowlands, a three-day music festival that took place from 19 to 21 August 2016 and welcomed 55,000 visitors. At the science fair, IBM had a booth that hosted both this research and a showcase of the weather stations of the TAHMO project with TU Delft.

screenshot

In the crowdsourcing task, the participants were asked to draw the shape of the brainstem and tumor in an MRI scan. Gathering data on whether a particular layer of a scan contains the brainstem, and determining its size, should allow a classifier to recognize the tumors. Furthermore, annotator quality can be measured with the CrowdTruth methodology by analysing the precision of the drawn edges in relation to the participants' alcohol and drug use, which we also collected. The hypothesis is that people under the influence can still make valuable contributions, but that these are of lower quality than those of sober people. This may clarify the reliability of online crowd workers, since it is unknown under what conditions they produce their annotations.
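One way to quantify the precision of a drawn shape is to rasterize each drawing to a mask and score it against a consensus mask with intersection-over-union; the sketch below illustrates this general idea, not the exact CrowdTruth metric:

```python
# Minimal sketch: scoring a participant's drawn region against a
# consensus region using intersection-over-union (IoU). Assumes each
# drawing has been rasterized to a boolean mask of the scan layer.
import numpy as np

def iou(drawn: np.ndarray, consensus: np.ndarray) -> float:
    """IoU between two boolean masks of equal shape."""
    intersection = np.logical_and(drawn, consensus).sum()
    union = np.logical_or(drawn, consensus).sum()
    return float(intersection / union) if union else 0.0
```

A participant's quality would then be, for example, their mean IoU across scans, which can be compared between the sober and under-the-influence groups.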

heatmap

The initial results in the heatmap of drawn pixels give an indication of the overall location of the brainstem, but further analysis of the individual scans will follow in order to measure worker quality and generate 3D models.
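For reference, such a heatmap can be produced by summing the rasterized drawings per scan layer; a minimal sketch, assuming boolean masks as above:

```python
# Minimal sketch: aggregate all drawings for one scan layer into a
# heatmap counting how many participants included each pixel.
import numpy as np

def heatmap(masks: list[np.ndarray]) -> np.ndarray:
    return np.stack(masks).astype(int).sum(axis=0)
```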

Posted on

From the 2nd to the 16th of July we organized the Big Data in Society Summerschool at the Vrije Universiteit Amsterdam. As part of our Collaborative Innovation Center with IBM, we presented an introduction to the technical and theoretical underpinnings of IBM Watson and discussed the use of big data and its implications for society. We looked at examples of how the original Watson system can be adapted to new domains and tasks, and presented the CrowdTruth approach for gathering training and evaluation data in this context. The participating students, who ranged from bachelor to PhD level, said they learned a lot from the lectures and found the practical hands-on sessions very useful.

Posted on

From May 29th until June 2nd 2016, the 13th Extended Semantic Web Conference took place in Crete, Greece. CrowdTruth was represented by Oana Inel, who presented her paper “Machine-Crowd Annotation Workflow for Event Understanding across Collections and Domains”, and by Benjamin Timmermans, who presented his paper “Exploiting disagreement through open-ended tasks for capturing interpretation spaces”, both in the PhD Symposium.

The Semantic Web group at the Vrije Universiteit Amsterdam was very well represented, with plenty of papers in the workshops and the conference. The paper on CLARIAH by Rinke, Albert and Kathrin, among others, won the best paper award at the Humanities & Semantic Web workshop. Here are some of the topics and papers that we found interesting during the conference.

EMSASW: Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web
In the Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web, a keynote talk was given by Hassan Saif titled “Sentiment Analysis in Social Streams, the Role of Context and Semantics”. He explained that sentiment analysis is nothing more than extracting the polarity of an opinion. With Web 2.0, sharing opinions has become easier, increasing the potential of sentiment analysis. In order to find these opinions, opinion mining first has to be performed, which is an integral part of sentiment analysis. Hassan compared several semantic solutions for sentiment analysis: SentiCircles, which does not rely on the structure of texts but on semantic representations of words in a context-term vector; Sentilo, an unsupervised domain-independent semantic framework for sentence-level sentiment analysis; and sentic computing, a multi-disciplinary tool for concept-level sentiment analysis that uses both contextual and conceptual semantics of words and can achieve high performance on well-structured and formal text.

Jennifer Ling and Roman Klinger presented their work titled “An Empirical, Quantitative Analysis of the Differences between Sarcasm and Irony”. They explained the differences between irony and sarcasm quite clearly. Irony can be split up into verbal irony, the use of words for a meaning other than the literal one, and situational irony, a situation where things happen opposite to what is expected. They made clear that sarcasm is an ironic utterance designed to cut or give pain; it is nothing more than a subtype of verbal irony. In tweets, they found that ironic and sarcastic tweets contain significantly fewer sentences than normal tweets.

PhD Symposium
Chiara Ghidini and Simone Paolo Ponzetto organized a very nice PhD Symposium. They took care to assign each student mentors who work in related domains, which made the feedback highly relevant and valuable. In this sense, we would like to thank our mentors Chris Biemann, Christina Unger, Lyndon Nixon and Matteo Palmonari for helping us improve our papers and for providing feedback during our presentations.

It was very nice to see that events are of high interest to the semantic web community. Marco Rovera presented his PhD proposal “A Knowledge-Based Framework for Events Representation and Reuse from Historical Archives”, which aims to extract semantic knowledge from historical data in the context of events and make it available for different applications. It was nice to see that projects such as Agora and the Simple Event Model (SEM), developed at VU Amsterdam, were mentioned in his work.

Another very interesting research project, on using human computation and crowdsourcing to solve problems that are still very difficult for computers, was presented by Amna Basharat: “Semantics Driven Human-Machine Computation Framework for Linked Islamic Knowledge Engineering”. She envisioned hybrid human-machine workflows, in which the skills and background knowledge of crowds and experts, together with automated approaches, aim to improve the efficiency and reliability of semantic annotation tasks in specialized domains.

Vocabularies, Schemas and Ontologies
Céline Alec, Chantal Reynaud and Brigitte Safar presented their work “An Ontology-driven Approach for Semantic Annotation of Documents with Specific Concepts”. This is a collaboration with a weather company, in which they use machine learning to classify things you can, but also cannot, do at a venue. This results in both positive and negative annotations. In order to achieve this, domain experts manually annotated documents and target concepts as either positive or negative. These target concepts were based on an ontology of tourist destinations with descriptive classes.

Open Knowledge Extraction Challenge
This year, the Open Knowledge Extraction Challenge was composed of two tasks, and two submissions were selected for each task.

Task 1: Entity Recognition, Linking and Typing for Knowledge Base population

  • Mohamed Chabchoub, Michel Gagnon and Amal Zouaq: Collective Disambiguation and Semantic Annotation for Entity Linking and Typing. Their approach combines the output of Stanford NER with the output of DBpedia Spotlight as the basis for various heuristics that improve the results, e.g., filtering verb mentions and merging mentions of a given concept by always choosing the longest span (a sketch of this merge heuristic follows this list). For mentions that were not disambiguated, they query DBpedia to extract the entity linked to each such mention, while for entities that have no types, they use the Stanford type and translate it to the DUL typing. In the end, their system outperformed Stanford NER by about 20% on the training set, and similarly outperformed the semantic annotators.
  • Julien Plu, Giuseppe Rizzo and Raphaël Troncy: Enhancing Entity Linking by Combining Models. Their system is built on top of the ADEL system presented in last year's challenge. The new system architecture is composed of various models that are combined in order to improve entity recognition and linking. Combining various models is indeed a very good approach, since it is very difficult, if not impossible, to choose one model that performs well across all datasets and domains.
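As a small illustration of the longest-span merge heuristic mentioned in the first submission, consider the sketch below; the mention representation is an assumption for illustration, not the authors' actual code:

```python
# Illustrative sketch of a longest-span merge: when two recognized
# mentions overlap, keep only the longest one.
def merge_longest(mentions: list[tuple[int, int, str]]) -> list[tuple[int, int, str]]:
    """Mentions are (start, end, text) spans; overlaps are resolved
    in favor of the longest span."""
    kept: list[tuple[int, int, str]] = []
    for m in sorted(mentions, key=lambda m: m[1] - m[0], reverse=True):
        if all(m[1] <= k[0] or m[0] >= k[1] for k in kept):  # no overlap with kept spans
            kept.append(m)
    return kept

print(merge_longest([(0, 6, "Barack"), (0, 12, "Barack Obama")]))
# -> [(0, 12, 'Barack Obama')]
```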

Task 2: Class Induction and entity typing for Vocabulary and Knowledge Base enrichment

  • Stefano Faralli and Simone Paolo Ponzetto: Open Knowledge Extraction Challenge (2016): A Hearst-like Pattern-Based Approach to Hypernym Extraction and Class Induction. They introduced WebisaDB, a large database of hypernymy relations extracted from the web. In addition, they combined WordNet and OntoWordNet to extract the most suitable class for the extracted hypernyms using WebisaDB.
  • Lara Haidar-Ahmad, Ludovic Font, Amal Zouaq and Michel Gagnon: Entity Typing and Linking using SPARQL Patterns and DBpedia. As a take-home message, their results show a strong need for (1) better linkage between the DBpedia resources and the DBpedia ontology and (2) turning some DBpedia resources into classes.

Semantic Sentiment Analysis Challenge
This challenge consisted of two tasks: one on polarity detection over one million Amazon reviews in 20 domains, and one on entity extraction over 5,000 sentences in two domains.

  • Efstratios Sygkounas, Xianglei Li, Giuseppe Rizzo and Raphaël Troncy: The SentiME System at the SSA Challenge. They used a bag of five classifiers to classify the sentiment polarity. This bagging was shown to result in better stability and accuracy of the classification. Four-fold cross-validation was used, preserving the ratio of positive and negative examples in each fold (see the sketch after this list).
  • Soufian Jebbara and Philipp Cimiano: Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture. They retrieved word embeddings using a skip-gram model trained on the Amazon reviews dataset, and used the Stanford POS tagger with 46 tags. Sentics were retrieved from SenticNet, resulting in five sentics per word: pleasantness, attention, sensitivity, aptitude and polarity. They found that these sentics improve the accuracy of the classification and allow for fewer training iterations. The polarity was retrieved using SentiWordNet and used as a training feature. The results were limited because there was not enough training data.
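The class-ratio-preserving folds mentioned for SentiME correspond to stratified cross-validation. A minimal sketch with scikit-learn, where the synthetic data and the default bagging ensemble are placeholders rather than the actual SentiME pipeline:

```python
# Minimal sketch: four-fold cross-validation that preserves the ratio
# of positive and negative examples in each fold (stratification).
# The synthetic data and default bagging ensemble are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, weights=[0.7, 0.3], random_state=0)

for train_idx, test_idx in StratifiedKFold(n_splits=4).split(X, y):
    model = BaggingClassifier(n_estimators=5).fit(X[train_idx], y[train_idx])
    print(f"fold accuracy: {model.score(X[test_idx], y[test_idx]):.2f}")
```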

IN-USE AND INDUSTRIAL TRACK
Mauro Dragoni presented his paper “Enriching a Small Artwork Collection through Semantic Linking”, a very nice project that highlights some of the issues that small museums and their collections encounter: data loss, lack of exposure, no linking to other collections, and no multilinguality. One of the issues they identified, poor linking to other collections, is one of the main targets of our DIVE+ project and system, which creates an event-centric browser for linking and browsing across cultural heritage collections. Working with small or local museums is very difficult due to poor data quality, quantity and data management. Attracting outside visitors is also very cumbersome, since the museums have little real exposure and collection owners need to translate the data into multiple languages. As part of the Verbo-Visual-Virtual project, this research investigates how to combine NLP with Semantic Web technologies in order to improve access to cultural information.

Rob Brennan presented the work on “Building the Seshat Ontology for a Global History Databank”, an expert-curated body of knowledge about human history. They used an ontology to model uncertain temporal variables, and coding conventions in a wiki-like syntax to deal with uncertainty and disagreement. This allows each expert to define their own interpretation of history. Different types of brackets are used to indicate varying degrees of certainty and confidence; however, the tool does not show all the possible values, just the likely ones. Three graphs were used for this model: the real geospatial data, the provenance and the annotations. Different user roles are supported in their tool, which they plan to use to model trust and the reliability of their data.

NATURAL LANGUAGE PROCESSING AND INFORMATION RETRIEVAL
In the paper “Towards Monitoring of Novel Statements in the News”, Michael Färber stated that the increasing amount of information currently available on the web makes it imperative to search for novel information, not only relevant information. The approach extracts novel statements in the form of RDF triples, where novelty is measured with regard to an existing KB and semantic novelty classes. One limitation of the system, left as future work, is that it does not consider the timeline: old articles could be considered novel if their information is not in the KB.
As a side note, we also consider novelty detection an extremely relevant task given the overwhelming amount of information available, and we have made first steps in tackling this problem by combining NLP methods and crowdsourcing (see Crowdsourcing Salient Information from News and Tweets, LREC 2016).

The paper “Efficient Graph-based Document Similarity” by Christian Paul, Achim Rettinger, Aditya Mogadala, Craig Knoblock and Pedro Szekely deals with assessing the similarity or relatedness of documents. They rank documents by relevance/similarity by first searching for surface forms of words in the document collection and then looking for co-occurrences of words in documents. They integrate semantic technologies (DBpedia, Wikidata, xLisa) to solve problems arising from language ambiguity: dealing with heterogeneous data (news articles, tweets) and with poor or missing metadata for images and videos, among others.
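As a toy illustration of the general idea behind entity-based document similarity (a simplification, not the paper's actual graph-based method), documents annotated with linked entities can be ranked by the overlap of their entity sets:

```python
# Toy illustration: represent each document by its set of linked
# entities and rank documents by Jaccard overlap with a query document.
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

docs = {
    "doc1": {"dbpedia:Barack_Obama", "dbpedia:United_States"},
    "doc2": {"dbpedia:Barack_Obama", "dbpedia:White_House"},
    "doc3": {"dbpedia:Python_(programming_language)"},
}
query = docs["doc1"]
ranking = sorted(docs, key=lambda d: jaccard(query, docs[d]), reverse=True)
print(ranking)  # ['doc1', 'doc2', 'doc3']
```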

Amparo E. Cano presented the work on “Semantic Topic Compass – Classification based on Unsupervised Feature Ambiguity Gradation”. For classification they used lexical features such as n-grams, entities and Twitter features, as well as semantic features from DBpedia. The feature space of a topic is semantically represented under the hypothesis that words have a similar meaning if they occur in a similar context. Related words for a given topic are found using Wikipedia articles. They found that enriching the data with semantic features improved the recall of the classification. For evaluation, three annotators classified the data; items on which they did not agree were removed from the dataset.

SEMANTIC DATA MANAGEMENT, BIG DATA, SCALABILITY
“Implicit Entity Linking in Tweets” by Sujan Perera, Pablo Mendes, Adarsh Alex, Amit Sheth and Krishnaprasad Thirunarayan presents a new approach to linking implicit entities by exploiting the facts and known context around given entities. To achieve this, they use a temporal factor to disambiguate entities present in tweets, i.e., they identify domain entities that are relevant at time t.

Keynotes
On Tuesday, Jim Hendler gave a keynote speech titled “Wither OWL in a knowledge-graphed, Linked-Data World?”. The topic of the talk was the question whether OWL is dead or not. In 2010 he claimed that semantics were coming to search. Some of the companies from back then, like Siri, had success, but many did not. SPARQL has been adopted in the supercomputing field, but that field is not yet a fan of RDF. Many large companies are also using semantic concepts, but not OWL: they are simply not linking their ontologies. Schema.org is now used in 40% of Google crawls. It is simple, and this is good because it is used in 10 billion pages; its simplicity keeps its use consistent.
Ontologies and OWL are like Sauron's tower: if you let one inconsistency in, it may fall over completely. The RDFS view is different: it does not matter if things mean different things, it is just about linking things together. In Web 3.0 there are many use cases for ontologies in web apps at web scale. There is a lot of data but few semantics. This explains why RDFS and SPARQL are used, but not why OWL is not. The problem is that we cannot talk about the right things in OWL.

On Thursday, Eleni Pratsini, Lab Director of the Smarter Cities Technology Center at IBM Research – Ireland, gave a keynote on “Semantic Web in Business – Are we there yet?”. Her work focuses on advancing science and technology in order to improve the overall sustainability of cities. Applying the semantic web in smart cities could be the main way to understand a city's needs and further empower it to take smart decisions for its population and environment.

We both pitched our doctoral consortium papers at the minute-of-madness session and presented them in the poster session. You can read more about Oana's presentation here, and Benjamin's presentation here.

By Oana Inel and Benjamin Timmermans

Posted on

I presented my doctoral consortium paper titled “Exploiting disagreement through open-ended tasks for capturing interpretation spaces” at the PhD Symposium of ESWC 2016.

An important aspect of the semantic web is that systems have an understanding of the content and context of text, images, sounds and videos. Although research in these fields has progressed over the last years, there is still a semantic gap between the multimedia data that is available and the human-annotated metadata describing its content. This research investigates how the complete space of human interpretations of the content and context of this data can be captured. The methodology consists of open-ended crowdsourcing tasks that optimize the capture of multiple interpretations, combined with disagreement-based metrics for evaluating the results. These descriptions can be used to improve information retrieval and recommendation of multimedia, to train and evaluate machine learning components, and to support the training and assessment of experts.
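A minimal sketch of a disagreement-aware metric in the spirit of CrowdTruth (a simplified illustration, not the full methodology): each worker's annotations on a unit form a vector over candidate interpretations, and a worker's agreement is the cosine similarity between their vector and the aggregate of the other workers' vectors:

```python
# Simplified sketch of a disagreement-based quality metric in the
# spirit of CrowdTruth (not the full methodology). Rows of `votes`
# are workers; columns are candidate interpretations of one unit.
import numpy as np

def worker_agreement(worker: np.ndarray, all_workers: np.ndarray) -> float:
    others = all_workers.sum(axis=0) - worker  # leave the worker out
    denom = np.linalg.norm(worker) * np.linalg.norm(others)
    return float(worker @ others / denom) if denom else 0.0

votes = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
for i, w in enumerate(votes):
    print(f"worker {i}: agreement = {worker_agreement(w, votes):.2f}")
```

Low agreement does not automatically mean a bad worker here; in an open-ended task it can also signal a valid alternative interpretation, which is exactly what disagreement-based metrics aim to expose.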
