econda to present xLiMe at the DMEXCO industry fair in Cologne, Germany, on September 14th and 15th

The xLiMe project - with a focus on the EXPLAIN use case - will be presented at the DMEXCO industry fair in Cologne, Germany, on September 14th and 15th.

The EXPLAIN use case investigates the generation of product recommendations based on evidence from Social Media and TV. One scenario is the automatic detection of trending and popular products in Social Media: these products can then easily be used as relevant and dynamic content on landing pages - e.g. the start page of an online shop. Another scenario is the detection of product images in video streams, which makes it possible to pick up new fashion trends from TV automatically and, again, use the matching products as relevant and dynamic content.
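As a rough illustration of the first scenario, the sketch below ranks products by how strongly their recent Social Media mention rate exceeds a baseline rate. All names, signatures and thresholds are illustrative assumptions for this newsletter; the actual EXPLAIN pipeline works on the annotated xLiMe media streams rather than on raw post text.

```python
from collections import Counter
from typing import Iterable, List


def trending_products(recent_posts: Iterable[str],
                      baseline_posts: Iterable[str],
                      product_names: List[str],
                      min_lift: float = 2.0) -> List[str]:
    """Return products whose recent mention rate clearly exceeds the baseline.

    Hypothetical helper for illustration only.
    """
    recent, baseline = Counter(), Counter()
    for counter, posts in ((recent, recent_posts), (baseline, baseline_posts)):
        for post in posts:
            text = post.lower()
            for name in product_names:
                if name.lower() in text:
                    counter[name] += 1

    # Smoothed ratio of recent vs. baseline mentions ("lift").
    scored = [((recent[n] + 1) / (baseline[n] + 1), n) for n in product_names]
    return [name for lift, name in sorted(scored, reverse=True) if lift >= min_lift]
```

Products that pass the lift threshold could then be served as the dynamic landing-page content described above.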

Please talk to Philipp Sorg or Conny Junghans from econda if you are interested. You'll find them at the PIA partner booth, Hall 8 Booth C.051/D.058.

New papers on technical details of entity summarization (KIT)

Further technical details of the entity summarization approach and detailed insights into computing PageRank on Wikipedia have recently been published:

> Andreas Thalhammer, Nelia Lasierra, and Achim Rettinger. LinkSUM: Using Link Analysis to Summarize Entity Data. In Web Engineering: 16th International Conference, ICWE 2016, Lugano, Switzerland, June 6-9, 2016. Proceedings, volume 9671 of Lecture Notes in Computer Science, pages 244–261. Springer International Publishing, Cham, 2016.

> Andreas Thalhammer and Achim Rettinger. PageRank on Wikipedia: Towards General Importance Scores for Entities. In Joint Proceedings Know@LOD and CoDeS 2016, co-located with the 13th Extended Semantic Web Conference, volume 1586 of CEUR Workshop Proceedings. CEUR-WS.org, 2016.

The latter paper was selected by the Know@LOD and CoDeS 2016 workshop organizers for publication in an enhanced version in the "ESWC 2016 Satellite Events" proceedings.

Current development at JSI: producing a high-quality stream of aggregated event information

Within xLiMe, one of our strongest focus points at JSI is producing a high-quality stream of aggregated event information on the Event Registry platform, built from a stream of multilingual online news. For each event, represented as a cluster of articles, we extract a time and a location, which do not necessarily correspond to the time and location of article publication. Lately we have been working hard on expanding these extraction capabilities to obtain an infobox-like structured representation of each event.
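Conceptually, the aggregated result for one event can be pictured as an infobox-like record such as the following Python sketch; the field names are assumptions made for this newsletter and do not mirror the actual Event Registry schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List


@dataclass
class ArticleRef:
    """One article in the event cluster (illustrative only)."""
    url: str
    language: str
    published: date


@dataclass
class Event:
    """Infobox-like aggregated event record (field names are assumptions)."""
    title: str
    event_date: date   # extracted event time, not publication time
    location: str      # extracted event location
    articles: List[ArticleRef] = field(default_factory=list)
    slots: Dict[str, str] = field(default_factory=dict)  # template slots, e.g. {"acquirer": "..."}
```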

We are developing a machine learning system that fills pre-defined event type templates with structured data extracted from events. The system computes features aggregated over all the articles that comprise an event, which provides data redundancy and increased model stability compared to extraction from a single article. Furthermore, we use a methodology based on canonical correlation analysis to project contextual language features into a language-agnostic space, which makes the extraction process cross-lingual.
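The toy sketch below illustrates the canonical correlation analysis idea on random data: paired feature matrices for the same events in two languages are projected into a shared space, in which a single slot-extraction model can then be trained. It assumes scikit-learn and NumPy are available and is not the actual JSI implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic stand-in for contextual language features aggregated per event:
# the same latent event semantics observed through two language-specific views.
rng = np.random.RandomState(0)
n_events, dim_lang1, dim_lang2, shared_dim = 200, 50, 60, 10

latent = rng.randn(n_events, shared_dim)  # shared event semantics
X1 = latent @ rng.randn(shared_dim, dim_lang1) + 0.1 * rng.randn(n_events, dim_lang1)
X2 = latent @ rng.randn(shared_dim, dim_lang2) + 0.1 * rng.randn(n_events, dim_lang2)

# Learn projections that maximally correlate the two language views.
cca = CCA(n_components=shared_dim, max_iter=1000)
cca.fit(X1, X2)

# Features from either language now live in one language-agnostic space,
# so a single extractor can be trained and applied cross-lingually.
Z1, Z2 = cca.transform(X1, X2)
print(Z1.shape, Z2.shape)  # (200, 10) (200, 10)
```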

Results obtained so far on simple templates for event types such as company acquisition (pictured) and earthquake are encouraging and on par with the top results obtained in the knowledge base population track at the Text Analysis Conference. In the future, we aim to extend the experiments to more event types and to develop an active learning component for building extractors for new event types.

The ICWE 2016 Best Demo is now available online

Last month, we reported that Andreas and Achim (both from KIT) had won the Best Demo Award at the ICWE 2016 conference in Lugano:

> Andreas Thalhammer and Achim Rettinger. ELES: Combining Entity Linking and Entity Summarization. In Web Engineering: 16th International Conference, ICWE 2016, Lugano, Switzerland, June 6-9, 2016. Proceedings, volume 9671 of Lecture Notes in Computer Science, pages 547–550. Springer International Publishing, Cham, 2016.

The demo is now online, featuring a nice use case from the biomedical domain.
