Milan-based deep tech AI startup Emotiva has raised €610K in a seed round led by LVenture Group, joined by other undisclosed angel investors. The company will use the funding mainly to accelerate its national expansion.
Founded in 2017 by Andrea Lori, Andrea Sempi, and Lorenzo Corbo, Emotiva is a deep tech AI company specialized in emotion recognition. It has developed computer vision and machine learning algorithms that analyze people’s emotional responses in real time, measuring micro facial expressions through a standard webcam to collect data insights useful for understanding human behavior.
The company has also developed EmPower, a SaaS platform that analyzes emotional reactions while people watch video content in a simple, fast, and scalable way. Through EmPower, companies can make more accurate decisions about marketing campaigns, using attention and emotional engagement as new metrics to evaluate content performance, refine targeting, and avoid wasting budget.
Luigi Mastromonaco, Head of Investments & Growth at LVenture Group, stated: “Deep tech is today the new global frontier of innovation, with investments exceeding $60 billion in 2020 and estimated to triple by 2025. Emotiva has developed cutting-edge technology based on artificial intelligence that recognizes human emotions, a crucial driver in the decision-making process, considering that more than 90% of our behaviors are determined by them. Nowadays, it is crucial for companies to measure and analyse this high added-value data to gain a deep understanding of consumers, thus optimizing processes and avoiding the waste of economic resources.”
Emotiva fulfils companies’ need to understand consumers’ decision-making processes more deeply, making complex and not easily accessible data affordable. This growing trend is confirmed by the startup’s results: in the first half of 2021, it more than quadrupled its turnover.
Source: EU-Startups, “Milan-based AI startup Emotiva raises €610K to analyse people’s emotional responses in real-time”