Purpose and goal
Easit wants to develop a system that determines, based on a text, whether it contains abusive language and/or other inappropriate content, such as signs of harassment. This feature could be connected to our case management system, Easit GO, but equally well to other information flows, such as an organization's incoming and outgoing e-mail.
Expected results and effects
The benefit of the solution would be to protect both sender and receiver in an exchange of information. By notifying a third (trained) person, escalation of bad behavior can be avoided, with a safer working environment as a result. Our hope is that this project offers interesting challenges, as it touches several areas of machine learning, such as sentiment analysis and classification. A system like this would have great potential, and as far as we know nothing exists on the market in as general a form as we envisage.
Planned approach and implementation
Easit works in an agile way, in iterative sprints; the work is continuously evaluated and adjusted as it progresses. The work will be divided into four phases: start-up and analysis, development, further development, and evaluation and completion. Exactly which method we will use will be investigated further in the start-up phase of the project. More generally, we aim for a hybrid classifier that may combine lexicographic ranking of incoming words with one or more neural networks trained on communication examples labeled as appropriate or inappropriate.
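The hybrid idea above could be sketched roughly as follows. This is a minimal illustration, not Easit's actual design: the lexicon, its severity weights, the threshold, and the combination rule are all assumptions, and the neural-network component is stubbed out as a probability argument.

```python
# Minimal sketch of a hybrid inappropriate-content flagger.
# Lexicon entries, weights, and threshold are illustrative assumptions only.
import re

# Hand-curated lexicon: word -> severity weight (hypothetical values)
LEXICON = {
    "idiot": 0.8,
    "stupid": 0.6,
    "hate": 0.5,
    "useless": 0.4,
}

def lexicon_score(text: str) -> float:
    """Maximum severity among flagged words found in the text (0.0 if none)."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return max(hits) if hits else 0.0

def classify(text: str, model_prob: float = 0.0, threshold: float = 0.5) -> bool:
    """Flag text if either evidence source exceeds the threshold.

    model_prob stands in for the output of a neural classifier trained on
    communication examples labeled appropriate/inappropriate; it defaults
    to 0.0 here since no such model exists in this sketch.
    """
    combined = max(lexicon_score(text), model_prob)
    return combined >= threshold

print(classify("You are a complete idiot"))           # lexicon hit -> True
print(classify("Please review the attached report"))  # no evidence -> False
```

In a real system the `max` combination would likely be replaced by a learned weighting, and a flagged result would trigger a notification to the trained third person rather than a hard block.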