AI and ethics. Samasource and ChatGPT: OpenAI Used Kenyan Workers to Make ChatGPT Less Toxic

*This is a human-centric story on Artificial Intelligence (AI) ethics.*

*An interesting piece by TIME magazine on the role and dangers of teaching
AI, in which Kenyan workers were used to train ChatGPT models.*

*I have left the graphical parts out. For anyone who wants to read through,
the link is shared at the end. But this shows how we are creating
psychologically disturbed workers in our communities. How are these workers
reintegrated into the community without being damaged?*

Documents reviewed by TIME show that OpenAI signed three contracts worth
about $200,000 in total with Sama in late 2021 to label textual
descriptions of sexual abuse, hate speech, and violence. Around three dozen
workers were split into three teams, one focusing on each subject. Three
employees told TIME they were expected to read and label between 150 and
250 passages of text per nine-hour shift. Those snippets could range from
around 100 words to well over 1,000. All of the four employees interviewed
by TIME described being mentally scarred by the work. Although they were
entitled to attend sessions with “wellness” counselors, all four said these
sessions were unhelpful and rare due to high demands to be more productive
at work. Two said they were only given the option to attend group sessions,
and one said their requests to see counselors on a one-to-one basis instead
were repeatedly denied by Sama management.

*Is Samasource breaking any local laws by handling material that is
proscribed under several acts of Kenyan law? Read on.*

In February 2022, Sama and OpenAI’s relationship briefly deepened, only to
falter. That month, Sama began pilot work for a separate project for
OpenAI: collecting sexual and violent images—some of them illegal under
U.S. law—to deliver to OpenAI. The work of labeling images appears to be
unrelated to ChatGPT. In a statement, an OpenAI spokesperson did not
specify the purpose of the images the company sought from Sama, but said
labeling harmful images was “a necessary step” in making its AI tools
safer. (OpenAI also builds image-generation technology:
time.com/collection/best-inventions-2022/6225486/dall-e-2/)
In February, according to one billing document reviewed by TIME, Sama
delivered OpenAI a sample batch of 1,400 images. Some of those images were
categorized as “C4”—OpenAI’s internal label denoting child sexual
abuse—according to the document. Also included in the batch were “C3”
images (including bestiality, rape, and sexual slavery) and “V3” images
depicting graphic detail of death, violence or serious physical injury,
according to the billing document. OpenAI paid Sama a total of $787.50 for
collecting the images, the document shows.

Questions to ask ourselves:

*Who should review AI systems if not humans?*
*How should the humans be treated?*
*What laws govern these new spheres of concern?*

*The silver lining is that Samasource is working to provide a safer
environment for its workers and, by extension, the Kenyan public. It has
terminated contracts for projects that were traumatizing its employees. The
sad bit is that some employees were also laid off due to the lost revenue.*

Samasource and ChatGPT: OpenAI Used Kenyan Workers on Less Than $2 Per Hour
to Make ChatGPT Less Toxic
time.com/6247678/openai-chatgpt-kenya-workers/?s=09