There is widespread awareness among researchers, companies, policy makers, and the public that the use of artificial intelligence (AI) and big data raises challenges involving justice, privacy, autonomy, transparency, and accountability. Organizations are increasingly expected to address these and other ethical issues. In response, many companies, nongovernmental organizations, and governmental entities have adopted AI or data ethics frameworks and principles meant to demonstrate a commitment to addressing the challenges posed by AI and, crucially, to guide organizational efforts to develop and implement AI in socially and ethically responsible ways.

However, articulating values, ethical concepts, and general principles is only the first step, and in many ways the easiest one, in addressing AI and data ethics challenges. The harder work is moving from values, concepts, and principles to substantive, practical commitments that are action-guiding and measurable. The next step in moving from general principles to impacts is to clearly and concretely articulate what justice, privacy, autonomy, transparency, and explainability actually involve and require in particular contexts. Without this, adoption of broad commitments and principles amounts to little more than platitudes and "ethics washing." The ethically problematic development and use of AI and big data will continue, and industry will be seen by policy makers, employees, consumers, clients, and the public as failing to make good on its own stated commitments.

The primary objectives of this report are to:
- demonstrate the importance and complexity of moving from general ethical concepts and principles to action-guiding substantive content;
- provide detailed discussion of two centrally important and interconnected ethical concepts, justice and transparency; and
- indicate strategies for moving from general ethical concepts and principles to more specific substantive content and ultimately to operationalizing those concepts.