Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them.
Ethical issues arise with AI systems as objects, i.e., tools made and used by humans; these include issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Ethical issues also arise with AI systems as subjects, i.e., ethics for the AI systems themselves, covered in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, there is the problem of a possible future AI superintelligence leading to a "singularity" (§2.10).
Müller, Vincent C., "Ethics of Artificial Intelligence and Robotics", The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/>.
20 Most Unethical Experiments in Psychology: https://www.onlinepsychologydegree.info/unethical-experiements-psychology/
Applying for Ethical Approval:
IRB: Office for Human Research Protections, Institutional Review Board Guidebook, "Chapter 3, Section A: Risk/Benefit Analysis," pp. 1-10. Retrieved May 30, 2012.
Informed Consent: https://researchsupport.admin.ox.ac.uk/governance/ethics/resources/consent
CITI Training: https://concordia.csp.edu/irbpublic/human-subjects-review/citi-training/
Institute for Ethics in AI, University of Oxford
Montreal AI Ethics Institute
The Alan Turing Institute
Stanford University Human-Centered Artificial Intelligence (HAI)
AI4Good