Organizer: Vrije Universiteit Brussel
This event will focus on privacy issues surrounding Artificial Intelligence. Enhancing efficiency, increasing safety, improving accuracy, and reducing negative externalities are just some of AI’s key benefits. However, AI also presents risks of opaque decision making, biased algorithms, security and safety vulnerabilities, and the upending of labor markets. In particular, AI and machine learning challenge traditional notions of privacy and data protection, including individual control, transparency, access, and data minimization. On content and social platforms, AI can lead to narrowcasting, discrimination, and filter bubbles.
A group of industry leaders recently established a partnership to study and formulate best practices on AI technologies. Last year, the White House issued a report titled Preparing for the Future of Artificial Intelligence and announced a National Artificial Intelligence Research and Development Strategic Plan, laying out a strategic vision for federally funded AI research and development. These efforts seek to reconcile the tremendous opportunities that machine learning, human–machine teaming, automation, and algorithmic decision making promise for enhanced safety, efficiency gains, and improved quality of life with the legal and ethical issues that these new capabilities present for democratic institutions, human autonomy, and the very fabric of our society.
Papers and Symposium discussion will address the following issues:
- Privacy values in design
- Algorithmic due process and accountability
- Fairness and equity in automated decision making
- Accountable machines
- Formalizing definitions of privacy, fairness, and equity
- Societal implications of autonomous experimentation
- Deploying machine learning and AI to enhance privacy
- Cybersafety and privacy
The workshop will take place within the Brussels Privacy Symposium at Vrije Universiteit Brussel. The CRC CROSSING at TU Darmstadt is supporting the workshop.