The Workshop on Trustworthy Algorithmic Decision-Making seeks to bring together scholars to identify future research opportunities and to suggest plans and research ideas for making the use of algorithms in society more trustworthy.
Computer-based algorithms are increasingly being used in systems that automatically make important decisions on behalf of people, including determining what news people see online, controlling speed and steering of cars, choosing prices for goods and services, filtering job applicants, recognizing and categorizing airport travelers, and making sentencing recommendations for people convicted of crimes. As these algorithms simultaneously become more common and more complicated, it is important to understand whether they can be trusted to make decisions like these, what makes algorithms trustworthy, and how algorithms can be made more trustworthy.
Fundamentally, these algorithms operate in a complicated socio-technical context that includes the designers of the algorithms, the data used as input to the algorithms, the interfaces that present and use their outputs, the people who choose the algorithms' goals and decide when to use them, and the societal laws and norms that influence their use. All aspects of this context influence the outputs of the algorithms, and also affect whether they are worthy of being trusted to make important decisions.
This workshop will bring together scholars from a variety of disciplines and backgrounds for a two-day working session in the Washington, DC area. The primary goal of this workshop is to develop ideas that will further define the problem space, the key problems, and the critical questions that need to be answered to make progress toward understanding, developing, and evaluating trustworthy algorithmic decision-making. A report on future challenges and opportunities will be produced and made available after the workshop.
December 4-5, 2017
Ritz-Carlton Pentagon City, Arlington, VA