Laasya Aki

This is my personal site where I make blog posts, detail my STEM pursuits, and share what I find cool.

25 August 2021

Meredith Whittaker

by Laasya Aki

Image: Meredith Whittaker (Gulf News)

Artificial intelligence (AI) now touches many parts of human life. While AI systems are intended to benefit the world, they also bring plenty of problems. The social aspects of AI haven't been fully worked out, and there are no formal guidelines in place; these systems are being integrated into society faster than ethical constraints can be finalized. Scientists, mathematicians, and engineers agree that it's essential to put guidelines in place for AI in society. One of these researchers is Meredith Whittaker.

Meredith Whittaker is the Minderoo Research Professor at New York University. She also founded Google's Open Research Group, which tackles major problems in computer science. In 2017, Whittaker and Kate Crawford founded the AI Now Institute, which researches the social implications of AI.

In 2018, it came to light that Google had signed a contract with the Pentagon to contribute AI technology to Project Maven, a program to improve the interpretation of video imagery, which in turn could be used to improve the targeting of drone strikes (3). Thousands of Google employees objected to the company's technology being used for military purposes, and Whittaker led many of the employee protests against Project Maven.

Google isn't the only company that has faced criticism over its uses of AI. Tesla, Uber, Facebook, and many others have been at the center of controversies involving AI. Many AI researchers agree that guidelines must be set, and that is why the AI Now Institute was created. The institute focuses primarily on four areas of AI ethics: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. AI has been used to make predictions in criminal justice, housing, education, and law enforcement, and in the absence of official guidelines, those predictions can violate basic human rights. Machines and AI have also been rapidly changing the economy through their ability to perform the same tasks as humans; many scientists, manufacturing companies, and others have benefitted from automated labor, but many people have experienced its serious drawbacks. Finally, AI systems learn from the data sets they are given, and those data sets are often biased and not inclusive.

There are also massive risks posed by errors in AI systems deployed in critical settings like hospitals and power grids. These problems are now being debated by scientists, mathematicians, and social researchers. Meredith Whittaker has taken a step that many others still need to take. AI is improving every day, but the dangers it poses grow in lockstep with those improvements. To prevent calamity, stricter rules and regulations must be put in place.

~ Edited by Christian Mueth

References:

  1. https://ainowinstitute.org/
  2. https://kennisopenbaarbestuur.nl/media/257225/ai_now_2018_report.pdf
  3. https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
  4. https://gulfnews.com/business/in-house-protester-says-goodbye-to-google-1.65265439

This article was originally published at the Teach-Technology Organization, Inc. online technology blog. I volunteer as a tech blog writer at this organization, which is dedicated to bridging the gap between seniors and technology. You can read this article (and many more) at the Teach Technology site.

tags: TeachTech - technology