Guests of WWSSS’17 can attend presentations on the biggest topics in web science, such as web data analysis, computational social science applied to major societal issues, social research based on online experiments, and an introduction to urban studies. They can also take part in hands-on workshops on data processing, analysis and handling, as well as in poster presentations. Participants will hone their ability to work in a cross-disciplinary team.

Since the School’s main goal is the promotion of cross-disciplinary communication and collaboration, the student teams are formed from representatives of different fields of science. Together, with the help of supervisors, they will work on specific tasks related to a given set of data. All teams will present the results of their work on the last day of the Summer School. The participants will also present their current research during a poster session and various discussions.

According to the organizers, participants do not need to be knowledgeable in all of the School’s topics; they do, however, need to be interested in the latest trends in web science, want to acquire data mining skills, and be able to work in a team.

“Most of our students come from Europe; there are also some from China, South Korea, India, the USA, Mexico and Chile. The participants have been given twenty different sets of data to choose from, all related to the topic of web science. These topics are quite diverse and, at times, surprising – for example, one concerns the results of research into the online behavior of sex workers. Most data sets are, of course, less “spicy”. Each set is accompanied by suggestions on what kind of work it could be used for. The participants pick what they like best, and we then form the teams based on their preferences and their areas of expertise. But the teams need to flesh out their projects by themselves. Some will try to determine how obese people differ from near-anorexic people in terms of the places they visit, while others will examine changes in vocabulary and tone in how particular topics are described in the press and media. The teams are made up of six people on average. The groups are also assigned mentors, although some mentors work with several teams at once. On the one hand, the students don’t have that much time: it’s less than a week, and in that time they will need to attend the lectures and still have time to enjoy the summer in St. Petersburg. On the other hand, they have been carefully selected, so we hope that on the final day we will see projects that combine modern data analysis methods with in-depth subject-specific insight. We expect the results of the more successful projects to be published later,” explains Andrey Filchenkov, associate professor at ITMO’s Computer Technologies Department and one of the School’s organizers.

For its first three days, the Summer School was held on ITMO University’s premises. In that time, seven speakers from various countries gave talks on web science. One of them was Chua Tat-Seng, Director of the Extreme Search Center at the National University of Singapore. In his talk “From Image to Video”, Prof. Chua told the audience about the unprecedented pace at which deep learning techniques are currently evolving.


Chua Tat-Seng

“The Holy Grail of artificial intelligence – the problem of combining the visual and the textual – has probably seen the most progress in recent years. Today’s algorithms outperform humans on large-scale visual recognition tasks and can describe images and video clips or even answer questions about them in natural language – things that could not even be dreamed of a few years ago,” says Chua Tat-Seng.

Researchers have spent years refining machine learning techniques for recognition. Today’s systems can reliably recognize speech and images, and such technology is used in many areas, from calorie-counting apps to forensics. But teaching machines to recognize moving objects in video remains a challenge: because objects do not stay constant from frame to frame, machines have difficulty keeping track of objects that overlap and intersect. The Singaporean professor’s work concerns the analysis of multimedia content: his team is moving away from static image recognition towards recognition in video and, further on, towards VR/AR technologies.
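For readers curious what “recognizing an image” looks like in practice, here is a minimal sketch using a pretrained network. It assumes the PyTorch and torchvision libraries and a placeholder photo “cat.jpg”; it is an illustration of the general technique, not any system described in the talk.

```python
# A minimal sketch of single-image recognition with a pretrained network.
# Assumptions: PyTorch + torchvision installed; "cat.jpg" is a placeholder file.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT              # ImageNet-pretrained weights
model = resnet18(weights=weights).eval()        # inference mode
preprocess = weights.transforms()               # resize / crop / normalize

image = Image.open("cat.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)         # class probabilities

top_prob, top_idx = probs.topk(1)
print(weights.meta["categories"][top_idx.item()], round(top_prob.item(), 3))
```

The same trained network, applied frame by frame to a video, quickly runs into the problems mentioned above: objects move, overlap and change appearance, so per-frame labels alone do not explain what is happening over time.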

The success of image analysis already makes people’s lives easier in many ways, but compared with what video analysis could offer, its applications are fairly limited. One example of machine recognition in everyday life is the web service Visenze, where users can upload a picture of a dress, bag or pair of sneakers they like, and the system lists similar products and where they can be bought.
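The article does not describe how Visenze works internally, but the general idea behind such visual search services can be sketched as follows: embed every catalogue image with a pretrained network, then return the nearest neighbours of the query image’s embedding. The file names, model choice and catalogue below are placeholders, not the service’s actual pipeline.

```python
# A hedged sketch of content-based visual search: nearest neighbours in
# embedding space. Not Visenze's actual implementation.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
backbone = resnet18(weights=weights)
backbone.fc = torch.nn.Identity()               # drop the classifier, keep features
backbone.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    """Map an image file to a unit-length 512-dimensional feature vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)

catalogue = ["bag_01.jpg", "dress_02.jpg", "sneaker_03.jpg"]   # placeholder files
index = torch.stack([embed(p) for p in catalogue])

query = embed("query_dress.jpg")                # the user's uploaded photo
scores = index @ query                          # cosine similarity (unit vectors)
best = scores.argsort(descending=True).tolist()
print([catalogue[i] for i in best[:3]])         # most similar products first
```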

So what is the advantage of video over images, and why does Chua Tat-Seng urge researchers to direct their efforts towards video recognition? The reason is that the video format offers a more fundamental understanding of content: objects change and intersect, which opens up a wider range of applications for such technology. In a video recording, the visual relationships between objects and subjects change over time, while images are fixed and cannot tell a complex story. In addition, video is a much better fit for integration with VR. In other words, to move from simple media content towards systems that can answer complex questions, researchers need to shift the focus from isolated depiction (images) to relations (video). Today’s developers have already taught machines to identify objects in video, form connections between them and describe their movements; the next step is to study the role of language and linguistic tools in establishing those connections, and to use the existing groundwork as a basis for future applications.
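As a rough illustration of that first step – identifying objects in video and keeping track of which is which over time – one could run an off-the-shelf detector on every sampled frame and link detections by how much their boxes overlap. This is only a sketch under assumed defaults (a placeholder “clip.mp4”, a 0.8 detection-score cut-off, a 0.5 overlap threshold), not the speaker’s system or a production tracker.

```python
# Sketch: per-frame object detection plus naive IoU-based identity linking.
import torch
from torchvision.io import read_video
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.ops import box_iou

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

frames, _, _ = read_video("clip.mp4", pts_unit="sec")    # (T, H, W, C) uint8
frames = frames.permute(0, 3, 1, 2).float() / 255.0      # to (T, C, H, W) in [0, 1]

tracks, prev_boxes, prev_ids, next_id = {}, None, [], 0
with torch.no_grad():
    for t, frame in enumerate(frames[::5]):               # sample every 5th frame
        det = detector([frame])[0]
        boxes = det["boxes"][det["scores"] > 0.8]          # confident detections only
        if prev_boxes is not None and len(prev_boxes) and len(boxes):
            iou = box_iou(boxes, prev_boxes)               # (new, old) box overlaps
        ids = []
        for i, box in enumerate(boxes):
            # reuse the previous identity if the box overlaps an old one enough
            if prev_boxes is not None and len(prev_boxes) and iou[i].max() > 0.5:
                ids.append(prev_ids[int(iou[i].argmax())])
            else:
                ids.append(next_id)
                next_id += 1
            tracks.setdefault(ids[-1], []).append((t, box.tolist()))
        prev_boxes, prev_ids = boxes, ids

print(f"{len(tracks)} objects followed across the sampled frames")
```

Even this simple linking step already goes beyond what a single image can express: each track records how an object’s position, and therefore its relations to other objects, changes over time.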