The National Science Foundation (NSF) is spending nearly $1 million for a study that will "detect harassing messages" on Twitter in order to provide "safe social interactions" online.
The study, led by researchers at Wright State University, began this August. It aims to curb cyberbullying, whether at school or in the workplace.
"As social media permeates our daily life, there has been a sharp rise in the use of social media to humiliate, bully, and threaten others, which has come with harmful consequences such as emotional distress, depression, and suicide," according to the grant for the project. "The October 2014 Pew Research survey shows that 73 [percent] of adult Internet users have observed online harassment and 40 [percent] have experienced it. The prevalence and serious consequences of online harassment present both social and technological challenges."
The study involves looking at "power relationships" between online social media users to help identify if someone is being harassed.
"This project identifies harassing messages in social media, through a combination of text analysis and the use of other clues in the social media (e.g., indications of power relationships between sender and receiver of a potentially harassing message)," the grant said. "The project will develop prototypes to detect harassing messages in Twitter; the proposed techniques can be adapted to other platforms, such as Facebook, online forums, and blogs."
Researchers hope to be able to identify "the generic language of insult," which they described as profanities and offensive language.
The study will involve high school and college students to "ensure wide impact of scientific research on the support for safe social interactions."
Amit Sheth, a computer scientist at Wright State University in Ohio who is leading the project, told the Washington Free Beacon that "developing a solution for detecting harassment is very challenging."
"We have a multidisciplinary team of computer scientists, social scientists, and are also working with high school teachers and students," he said. "We will be doing extensive data collection, use cases, etc., and evaluating research outcomes with high school students and working professionals."
Sheth said the goal of the research is to develop an open source tool that can be used by parents.
The project’s researchers will use text mining, natural language processing, and analysis of social media to "determine and evaluate potential harassment and harassers."
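At its simplest, the text-mining step described here can be sketched as a lexicon-based scorer that flags messages containing what the researchers call "the generic language of insult." The word list, scoring method, and threshold below are illustrative assumptions, not the project's actual system:

```python
import re

# Hypothetical insult lexicon -- a tiny stand-in for the project's
# "generic language of insult"; a real system would use a far larger
# list of profanities and offensive terms.
INSULT_LEXICON = {"idiot", "loser", "stupid", "worthless"}

def insult_score(message: str) -> float:
    """Return the fraction of tokens that match the insult lexicon."""
    tokens = re.findall(r"[a-z']+", message.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in INSULT_LEXICON)
    return hits / len(tokens)

def flag_potential_harassment(message: str, threshold: float = 0.2) -> bool:
    """Flag a message when insulting terms exceed the (assumed) threshold."""
    return insult_score(message) >= threshold
```

As the researchers note below, content analysis of this kind is only a first pass; a high score alone cannot distinguish an insult from banter.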
The project's website goes into further detail regarding the difficulty of automatically detecting harassing remarks on social media.
"An apparent ‘bullying conversation’ between good friends with sarcastic content presents no serious threat, while the same content from an identifiable stranger may function as harassment," the website states. "Content analysis alone cannot capture these subtle but important distinctions."
The study will attempt to address this issue by using algorithms that can detect the context in which online harassment takes place.
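One way to picture this context-aware step is to weight a raw content score by the relationship between sender and receiver. The relationship fields and weighting heuristics below are assumptions for illustration (using follower imbalance as a crude proxy for a power relationship), not the project's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    """Minimal, assumed representation of a sender-receiver relationship."""
    mutual_follow: bool      # rough proxy for friendship
    sender_followers: int    # follower counts as a crude power signal
    receiver_followers: int

def harassment_score(content_score: float, rel: Relationship) -> float:
    """Weight a raw content score by social context.

    Assumed heuristics: messages between mutual followers ("good
    friends") are discounted as likely banter, while a large follower
    imbalance in the sender's favor -- one possible power-relationship
    cue -- raises the score.
    """
    score = content_score
    if rel.mutual_follow:
        score *= 0.3          # likely sarcasm or banter between friends
    elif rel.sender_followers > 10 * rel.receiver_followers:
        score *= 1.5          # power asymmetry favoring the sender
    return min(score, 1.0)
```

Under this sketch, the same insulting text scores low between mutual followers and high when sent by a comparatively powerful stranger, which is the distinction the researchers say content analysis alone cannot capture.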
Other goals of the project include detecting "harassing social media accounts automatically," and educating teenagers about cyberbullying.
Research will continue until August 2018 and has so far cost taxpayers $925,104.
The NSF has funded similar projects in the past. Rutgers University received $117,102 earlier this year to automatically detect cyberbullying messages online in real time.