OK, “lightning speed” might be a touch over the top, but one thing is certain: there has been persistent progress in the field of artificial intelligence since the 1950s. For nearly 70 years, mathematicians, scientists, researchers, and developers have been discovering innovative and creative ways to leverage the versatility of artificial intelligence. One of the most notable advancements of the last few years is undoubtedly the introduction of AI bots.
At the beginning of 2016, Facebook introduced its bot platform for Messenger, and users far and wide began using AI bots to make purchasing decisions, complete financial transactions, and engage in instant two-way communication, among other uses. In fact, we are all – to one degree or another – interacting with bots on a daily basis, whether we are aware of it or not. And while the use of AI bots might now be classified as “old school” in the tech world, some developers are breathing new life and purpose into how we interact with this technology. One new and exciting area involves the policing of workplace harassment.
What constitutes sexual harassment?
Sexual harassment within the workplace can take many forms: unwelcome physical contact, sexually suggestive comments or jokes, requests for sexual favors, and the display or sharing of explicit material, among others.
In spite of some management efforts to deal proactively with this issue, workplace sexual harassment remains widespread in virtually every sector and at every employment level. Several studies indicate that approximately 25 percent of women in the workforce have experienced workplace sexual harassment – and the language services industry is no exception.
In a recent Nimdzi study, 53.3 percent of women within the language services industry felt discriminated against because of their gender or sexuality.
It is only fair to note that in the same survey, 35 percent of women and 43 percent of men claimed to be “unsure” whether or not they themselves had ever discriminated against anyone due to their gender or sexual orientation.
These statistics should be setting off alarm bells. We are nearly 20 years into the 21st century, and still we are faced with report after report of political powerhouses, heads of corporations, and coworkers who foster intimidating – and sometimes downright hostile – working environments on the basis of gender and sexual orientation. Why does this go on?
One predominant reason may lie in our collective history of “men” going out to work while “women” cared for the household and children. Perhaps a subconscious belief persists within our culture that this should still be the rule, and that belief leads to a power struggle when women compete in the workplace – hardly an excuse, but perhaps one of many reasons this ugliness ensues. And because sexual harassment is a form of intimidation, women who experience it feel stressed, embarrassed, and fearful, and will often stay silent to avoid dealing with the situation altogether.
The fear of retaliation – of losing one’s position, one’s job, one’s reputation – can all play a part in survivors not stepping forward. By and large, those who have experienced workplace harassment choose not to report their experiences to their human resources department and are too intimidated to speak to someone in authority within their organization.
Up to this point, if you have thought of sexual harassment as predominantly affecting women, think again. Men, although to a lesser degree, have also stepped out of the shadows to report discrimination. Gay men in particular, as well as men from racial or ethnic minorities, have reported workplace discrimination based on their race or sexual orientation.
So, what can employers do to help ease these fears and encourage those affected by workplace harassment to step forward? Along came Spot…
Spot is a free web-based tool that any employee can access. It allows employees to anonymously report inappropriate behavior. This AI bot interacts with users, prompting them to share their experiences in a confidential manner.
Using “cognitive interview” techniques, Spot asks practical yet neutral questions about the location and time of the incident, the individuals involved, and whether there were any witnesses. Once the Q&A session is complete, Spot converts the collected information into a PDF that can be printed and/or submitted electronically to one’s HR department, or to anyone the survivor chooses to receive the report.
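As a rough illustration of how a structured-interview bot of this kind might work – note that the question set, field names, and plain-text report format below are hypothetical stand-ins, not Spot’s actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical question set modeled on the neutral, fact-focused
# prompts described above; Spot's real questions differ.
QUESTIONS = {
    "location": "Where did the incident take place?",
    "time": "When did it happen?",
    "people": "Who was involved?",
    "witnesses": "Were there any witnesses?",
    "account": "In your own words, what happened?",
}

@dataclass
class Interview:
    answers: dict = field(default_factory=dict)

    def ask_all(self, respond):
        # `respond` is any callable mapping a question to an answer,
        # e.g. `input` in a console session or a chat-widget callback.
        for key, question in QUESTIONS.items():
            self.answers[key] = respond(question)

    def report(self) -> str:
        # Timestamped plain-text report; a real tool would sign the
        # record and render a PDF instead of returning a string.
        lines = [f"Incident report ({datetime.now(timezone.utc).isoformat()})"]
        for key, question in QUESTIONS.items():
            lines.append(f"{question}\n  {self.answers.get(key, '(no answer)')}")
        return "\n".join(lines)

interview = Interview()
interview.ask_all(lambda q: f"[answer to: {q}]")  # stand-in for a live chat session
print(interview.report())
```

The key design point the sketch captures is that the bot collects each answer under a fixed, neutral question, so the resulting report is consistent and factual regardless of how distressed the reporter may be.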
Are employees likely to interact with AI bots about harassment?
People may feel more comfortable interacting with a bot than with a human when it comes to such sensitive matters. There is a certain stigma that people feel about reporting incidents but if they are able to talk anonymously to a bot, they may be more inclined to come forward.
Other bonuses to using Spot include its high level of confidentiality. Once the interview is completed, Spot allows the user to edit the answers in any way he or she sees fit. It then emails the user a PDF that has been securely signed and timestamped. Users are free to send the PDF to human resources, send it to anyone of their choosing, or keep the document for their own records. If they choose to forward it, the email is sent from Spot’s email server – the user’s identity remains fully protected. And if the user ends the session or closes the browser or tab, no chat history remains – no one can revisit the user’s comments.
Since its launch at the end of 2017, Spot has been able to chat with hundreds of users, and its Talk To Spot site has received thousands of visits. Although the team continues its dedication to ensuring exceptional UX, they have their eyes on the future – marketing AI-driven management systems to companies. As early as this summer, the team hopes to have a tool that companies can use to analyze, track, and manage reports of harassment and discrimination.
The Me Too movement is a force to be reckoned with. With the wave of women stepping forward in light of recent, widely covered allegations of sexual misconduct, Spot is not alone. Callisto, a non-profit organization that develops technology to combat sexual assault and harassment, offers college students a confidential and effective means to report sexual discrimination and harassment. This year, Callisto is also expanding into the professional realm with technology that equips users with tools to report crimes and offers a matching system for cases in which the same perpetrator is identified by more than one survivor.
STOPit is a mobile app for both survivors and administrators that not only allows a confidential reporting channel but also a two-way communication channel with tools to facilitate and conduct investigations.
Botler.ai, a Montreal-based tool, uses natural language processing to help users determine whether an incident qualifies as sexual harassment or another sex crime under the US criminal code or under Canadian law. The idea is to strengthen a survivor’s confidence in reporting by equipping him or her with established legal doctrine. Botler.ai is also able to generate incident reports.
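As a toy sketch of the kind of classification step such a tool performs – the keyword rules and labels below are a crude stand-in for Botler.ai’s actual NLP models, which are far more sophisticated:

```python
# Toy stand-in for an NLP triage step: flags whether a free-text
# description matches crude indicators of harassment. A real system
# like Botler.ai uses trained language models, not keyword matching.
INDICATORS = {
    "unwanted touching": "possible assault",
    "repeated comments": "possible verbal harassment",
    "quid pro quo": "possible quid pro quo harassment",
}

def triage(description: str) -> list[str]:
    """Return the labels whose indicator phrases appear in the text."""
    text = description.lower()
    return [label for phrase, label in INDICATORS.items() if phrase in text]

flags = triage("My manager made repeated comments about my appearance.")
print(flags)  # ['possible verbal harassment']
```

Even this trivial version shows the value of the approach: it gives the reporter neutral, named categories to attach to an experience, rather than leaving them to guess whether what happened “counts.”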
It seems as though perpetrators of sexual misconduct don’t just target humans. It has likewise become apparent that some programmers and developers require specialized training in how their AI bots should respond when on the receiving end of unprovoked, unwanted, and inappropriate comments of a sexual or gender-centric nature.
But it goes further than that. In light of recent reports, some developers may need training not only in how to program their bots, but in how they themselves react to sexual and gender-centric harassment. The programmed responses uttered by some AI bots indicate a dismissive attitude toward sexual misconduct, and in some cases the AI’s reaction to discriminatory language is flat-out offensive.
A recent petition put forth by Care2.com is urging that both Amazon’s Alexa and Apple’s Siri be reprogrammed to more proactively deal with inappropriate language of a sexual or discriminatory nature.
Currently, when Alexa, Siri, or even Cortana is directly confronted with misogynistic words or phrases, the programmed response is passive-aggressive at best, with lines like “Now, now” or “Well, that’s not going to get us anywhere.” But the most shocking is the following exchange between Siri and a human:
In light of the Me Too movement, and in solidarity with all women, these pre-programmed responses are demonstrably offensive. As of December 2017, the petition had received over 8,000 signatures – more than 80 percent of its 10,000-signature goal.
One other question worth pondering: why are the vast majority of these AI bots given female voices and feminine names? Food for thought.
AI bots continue to advance in ways that meet societal needs, but the true work lies in teaching our young what respect means. We need to introduce these lessons often and repeat them frequently, but we cannot stop there.
We need to engage in meaningful dialogues at home, at work, at play, online, in travel, and in virtually every circumstance. We need to empower one another with the tools to not only protect ourselves against these atrocities but to stop them in their very tracks.
AI bots will undoubtedly prove helpful in this endeavor but once again, the robots cannot save us. Humans must step up and end sexual harassment and discrimination in the good old-fashioned way – human solidarity, arm in arm, hand in hand.
Together, let us work to STOP discrimination of every kind.