Code of Ethics

Ethics in the workplace

From the article: Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance

When Pichai is referenced below, we are talking about Google CEO Sundar Pichai.

Google initially committed to not using artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven.

This project was a government-funded artificial intelligence program to analyze drone footage.

“How AI is developed and used will have a significant impact on society for many years to come,” Pichai wrote. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

Google employees were concerned that the company would be involved in building AI-powered weapons; some even left the company over it.

“While Google’s statement rejects building AI systems for information gathering and surveillance that violates internationally accepted norms, we are concerned about this qualification,” said Peter Asaro.

How does this relate to ethics in the workplace?

Google is dangerously close to violating two of the general principles of the ACM Code of Ethics:

1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.

and

1.6 Respect privacy.

Project Maven goes against respecting people's privacy.

For now, Google has said it does not plan to use this technology irresponsibly, but ultimately time will tell.

Ethics in Technology

From the article: The ethical dilemmas of self-driving cars

There is no denying that self-driving cars are becoming increasingly common.

"Nobody's talking about ethics," Ford Motor Co. chairman Bill Ford

It is nearly impossible to calculate all of the millions of possibilities a car may encounter while driving on the road.

Consider, said Toyota Canada president Larry Hutchinson, the ethical dilemma that an autonomously driven car would need to resolve in an instant when a child suddenly jumps into its path from the curb. There's no time to brake. What then?

With that in mind, does it violate this principle from the Software Engineering Code of Ethics?

1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.

Could this exact situation that Larry Hutchinson spoke of be that violation?

But then it presents the question: is any driver of a car truly safe? What decision would a human brain make in this scenario? It's hard to tell.

Germany has actually set out to answer some of these ethical questions regarding self-driving cars.

According to the infographic in the article, driverless vehicles can, on average, stop more quickly than the average human driver.
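To give a rough sense of why this is plausible, here is a minimal back-of-the-envelope sketch (not from the article) comparing stopping distances. The reaction times and deceleration used below are illustrative assumptions: roughly 1.5 s for a human driver versus roughly 0.5 s for a sensor-based system, with the same braking hardware.

```python
# Rough stopping-distance comparison: reaction distance + braking distance.
# All numbers below are illustrative assumptions, not figures from the article.

def stopping_distance(speed_kmh: float, reaction_s: float, decel_ms2: float = 7.0) -> float:
    """Distance in metres travelled from hazard detection to a full stop."""
    v = speed_kmh / 3.6                       # convert km/h to m/s
    reaction_distance = v * reaction_s        # distance covered before braking begins
    braking_distance = v ** 2 / (2 * decel_ms2)
    return reaction_distance + braking_distance

speed = 50  # km/h, a typical urban speed limit
human = stopping_distance(speed, reaction_s=1.5)      # assumed human reaction time
automated = stopping_distance(speed, reaction_s=0.5)  # assumed sensor/compute latency

print(f"Human driver:     {human:.1f} m")
print(f"Automated system: {automated:.1f} m")
```

Under these assumptions, the shorter reaction time alone accounts for a difference of more than ten metres at city speeds, even though the brakes themselves are identical.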

Does this make a solid case for ethical driverless cars?

This topic ultimately falls under Principle 4 of the Software Engineering Code of Ethics: Judgment.