
Shoppable vids - Semi-autonomous video tagging

Reference number 2017-03643
Coordinator Stockholms universitet - Institutionen för data- och systemvetenskap
Funding from Vinnova SEK 1 889 000
Project duration December 2017 - November 2019
Status Completed

Purpose and goal

We developed a demonstrator that allows online viewers of runway videos to select, shop, and share individual garments. It supports semi-automatic detection of objects in recordings of real fashion shows provided by our partners, and serves as a post-production tool for quick and easy tagging of the garments and accessories worn by models. The demonstrator consists of two main components: a deep learning network that provides the vision functionality and a web-based client that provides the tagging tool.
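The split between detector and client described above can be illustrated with a minimal sketch. Everything here is hypothetical (the `Detection` structure, the field names, and the confidence threshold are assumptions, not the project's actual data format): the network emits per-frame bounding boxes, and the web client consumes them as JSON tags.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Detection:
    frame: int    # video frame index
    label: str    # garment category, e.g. "jacket"
    box: tuple    # (x, y, width, height) in pixels
    score: float  # detector confidence in [0, 1]

def to_client_tags(detections, min_score=0.5):
    """Keep confident detections and serialize them for the tagging UI."""
    kept = [asdict(d) for d in detections if d.score >= min_score]
    return json.dumps(kept)

dets = [
    Detection(frame=0, label="jacket", box=(10, 20, 80, 120), score=0.9),
    Detection(frame=0, label="bag", box=(200, 50, 40, 60), score=0.3),
]
tags = to_client_tags(dets)  # only the confident "jacket" detection survives
```

The point of the sketch is the division of labour: the network side only has to produce structured detections, while all interaction (selecting, correcting, sharing) lives in the client.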

Expected results and effects

Industrial results: An advanced demonstrator is now available and will continue to be improved. The underlying network can also be presented on its own and generate new ideas on how to solve industrially relevant problems. Furthermore, a test data set will be distributed, which will increase the partners' visibility in academia. Research results: The work is part of a Ph.D. project that will continue for another 2.5 years. A paper on the Shoppable Vids project has also been written.

Planned approach and implementation

The project was delayed due to a lack of computer vision staffing. We recruited a Ph.D. student, and since then the work has progressed with good collaboration; we have continued development since the project ended in November 2019. During the project, we discovered new challenges stemming from the current state of computer vision technology. We therefore introduced new manual features in our tool, such as a merge "tracklets" feature, and redesigned the interface to clarify the manual work that is needed.
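The merge "tracklets" feature mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the project's implementation: a tracklet is assumed to be a list of `(frame, box)` pairs, and merging joins two fragments of the same garment's track when the tracker has lost it partway through the video.

```python
def merge_tracklets(a, b):
    """Merge two tracklets (lists of (frame, box) pairs) into one,
    sorted by frame. Refuses to merge if they overlap in time, since
    that would mean the fragments cannot belong to one object track."""
    frames_a = {frame for frame, _ in a}
    if any(frame in frames_a for frame, _ in b):
        raise ValueError("tracklets overlap in time; cannot merge")
    return sorted(a + b, key=lambda item: item[0])

# Two fragments of one jacket's track, separated by a gap where
# the tracker lost the garment (e.g. the model turned around).
t1 = [(0, (10, 20, 80, 120)), (1, (12, 21, 80, 120))]
t2 = [(5, (30, 22, 80, 118)), (6, (33, 23, 80, 118))]
merged = merge_tracklets(t1, t2)
```

In the tool this would be a manual operation: the user identifies that two automatically produced fragments show the same garment and merges them, compensating for the tracker's failures.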

The project description has been provided by the project members themselves and the text has not been reviewed by our editors.

Last updated 14 February 2020

