Dr. Alvin Marcelo of AeHIN begins his talk with the AeHIN GAPS Framework
Bangkok, Thailand – The Asia eHealth Information Network (AeHIN) participated in the PMAC 2025 plenary session “Equity and Accountability in Digital Health and Artificial Intelligence (AI): Addressing Risks, Digital Health Foundation GAPS, and Advancing Open and Local Solutions” on February 1, 2025, at World Ballroom A of Centara Grand & Bangkok Convention Centre. The panel discussed the foundational resources needed for digitization and AI, with panelists bringing in feminist, donor, and implementation perspectives.
Dr. Smisha Agarwal from the Johns Hopkins Bloomberg School of Public Health moderated the session. The panelists were Dr. Alvin Marcelo from AeHIN, Lucy Kombe from the Zamara Foundation, Dr. Leo Celi from the Massachusetts Institute of Technology (MIT), and Marelize Gorgens from the World Bank.
Digital Health Equity and Accountability and Good Governance
Dr. Alvin Marcelo talked about the gaps in government-led digital health transformation in the Philippine context and their connection to AeHIN’s work. He began by saying that “the greatest opportunity or risk for fostering an enabling environment for equity and accountability in the digital health and AI systems that we’re building lies in good governance.”
In 2012, AeHIN investigated why eHealth systems were not working well and, after 10 years, realized that it all came down to the foundations named in the AeHIN Mind the GAPS framework: governance, architecture, people and program management, standards, and interoperability. The framework calls for ensuring that decisions are made by the right people, building the blueprint together, having competent people build what is on the blueprint, and adopting the standards, or building blocks, needed to develop interoperable systems.
On the topic of equity, Dr. Marcelo shared the 10 commandments of Ethical Medical AI and the seven sins of medical AI, and encouraged the audience to consider adopting the Health Data Governance (HDG) Principles to prioritize equity in three ways: 1) ensure we are protecting the people in whatever we do, develop, and consume, 2) ensure we are promoting health value, not economic value, as the main goal, and 3) ensure that the vulnerable and marginalized are not left behind.
He also presented the AeHIN GAPS 2.0 Call to Action, which aims to ensure that governments are set up to maintain a safe environment for digital health. The call to action expands the Mind the GAPS framework by giving countries actionable steps for planning and implementing their national digital health strategies.
Equity and Accountability in Digital Health in Kenya through a feminist lens
Lucy Kombe presented the evolving landscape of digital health and AI in Kenya, highlighting the need for equity and accountability to ensure these technologies benefit all.
At present, 43% of Kenyans still do not have internet access, limiting their access to digital health tools. There is also limited awareness of digital rights, such as data protection, leaving people’s data open to exploitation by third parties. A data gap persists as well: solutions imported from other countries fail to address the real needs and realities of women in Kenya. Despite these challenges, Kenya has also made progress.
The country has local and open solutions such as M-TIBA, which people can use to save money and pay for healthcare and insurance. There are also community-based initiatives that increase women’s involvement in tech development to ensure inclusivity and address the needs of all genders.
The Zamara Foundation challenges digital health systems that ignore women’s health processes, priorities, and lived experiences, exclude women from digital leadership, and prioritize speed over safety. The foundation works to champion and respond to women’s needs and to address the underrepresentation of women in tech industries. Kombe called on the audience to advocate for women’s digital rights, support women-led tech projects, and educate girls and women to use and shape digital tools.
Equity in Data and Models
Dr. Leo Celi discussed equity in data and models that address the human problem. He wanted the audience to remember that “AI requires a redesign of our systems for education, innovation, and regulation.” People cannot use technology to fix broken systems; AI can serve as a catalyst for change, but AI itself is not the solution. “We do not have an AI problem; we have a people problem. We need to fix the human problem,” he said.
Dr. Celi also stressed that AI is trained on historical data reflecting inequalities and a knowledge hierarchy. This makes AI inequitable by design: feeding this data into AI embeds these problems into the system. He voiced his misgivings about machine learning and the race to build models without knowing how the data were collected and by whom.
He mentioned that ChatGPT is the fastest-adopted technology in history, reaching 1 million users in five days and 100 million in two months. Before closing his talk, he shared his team’s study, Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study, which illustrates the problem with how AI is currently developed. The team entered the same paragraphs repeatedly, changing only the patient’s gender and race each time. The outputs did not accurately reflect the prevalence of diseases and amplified social biases.
Donor Perspective on Accountability and AI
Marelize Gorgens talked about fairness, transparency, and responsibility in AI related to their work on providing development assistance to governments to ensure responsible investment in health and AI.
She mentioned the need for transparency about the evidence provided, and how it was collected, when approving AI-enabled medical devices. She also talked about post-market surveillance, which involves tracking and monitoring AI solutions after clearance is granted; this, too, requires access to good data.
While countries may have different regulatory imperatives, governments should explore and understand AI governance. Gorgens proposed that the health sector consider governance of, for, with, and by AI; identify the parts of the health sector that might need to change because of AI; and keep humans in the loop.
Lastly, Gorgens showed the dimensions of AI governance and the foundations (digital infrastructure, human capital, and local ecosystems) that should be in place for AI to be rightful, useful, and productive in society.