Research: Building AI We Can Trust
Picture a world where a self-driving shuttle navigates rush-hour traffic with your children in the back seat. Where AI helps doctors make split-second decisions in the emergency room and performs complex surgeries with precision. Where algorithms decide where and how to source clean energy, and how to build its massive connected infrastructure, without harming the environment. In each scenario, the stakes couldn’t be higher, and trust becomes everything.
As a researcher and a collaborator, I am working to ensure that as AI systems grow more powerful, they also grow more trustworthy and are deployed for the public good. My academic and personal journey from South Sulawesi, Indonesia (where nickel deposits sit in our backyards) to Stanford (where exciting technologies are being crafted into products) has given me a unique perspective: technological innovation means nothing if it does not serve humanity equitably and safely.
We are at an inflection point. AI systems are making decisions that affect millions of lives, yet we are still figuring out how to ensure they are safe, fair, and aligned with human values. Traditional approaches to safety (waiting for failures to happen and then fixing them) simply will not work when a single mistake could be catastrophic.
Consider autonomous vehicles. To prove they are safe using conventional testing, we would need them to drive billions of miles, taking decades and astronomical costs. By then, the technology would be obsolete. We need a fundamentally different approach, one that can predict and prevent failures before they occur, validate safety without endless real-world testing, and ensure benefits reach everyone, not just the privileged few.
A New Paradigm for AI Safety
My research contributes to a radical shift in how we think about AI safety. Instead of treating it as an afterthought, I have developed mathematical frameworks that build trustworthiness into AI systems from the ground up. My Deep Importance Sampling technique can find potential failures 1000x faster than conventional methods. Imagine being able to simulate a lifetime of driving scenarios in just weeks, not years.
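To make the core idea concrete, here is a toy sketch of importance sampling for rare-event failure estimation. The distributions, threshold, and failure criterion below are illustrative assumptions, not the actual safety model: we shift sampling toward the failure region, then reweight by the likelihood ratio so the estimate remains unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_failure(x):
    # Hypothetical failure criterion: a scenario "score" above 5 is unsafe.
    return x > 5.0

n = 100_000

# Naive Monte Carlo: sample scenarios from the nominal N(0, 1) model.
# The true failure probability (~2.9e-7) is so small that 100k samples
# almost never contain a single failure.
p_naive = is_failure(rng.standard_normal(n)).mean()

# Importance sampling: draw from a proposal shifted into the failure
# region, then reweight each sample by the likelihood ratio p(x) / q(x).
shift = 5.0
x = rng.normal(shift, 1.0, n)
log_ratio = -0.5 * x**2 + 0.5 * (x - shift) ** 2  # log N(x; 0,1) - log N(x; shift,1)
p_is = np.mean(is_failure(x) * np.exp(log_ratio))

print(f"naive: {p_naive:.2e}, importance sampling: {p_is:.2e}")
```

With the same sample budget, the naive estimate is dominated by noise (often exactly zero), while the reweighted estimate lands near the true tail probability; this variance reduction is what makes orders-of-magnitude speedups possible.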
But safety is not just about preventing crashes. It is about ensuring AI systems know their own limitations. Through my work on adaptive meta-learning, I have created a framework that lets machine learning systems recognize when they are facing situations beyond their training, like giving them a sixth sense for uncertainty. This approach earned a CPS Rising Star 2024 award, but more importantly, it is closing the loop between safety validation and development.
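One minimal way to picture this "sixth sense" is ensemble disagreement: a stand-in for the actual meta-learning framework, using toy data and toy models of my own invention purely for illustration. When members of an ensemble disagree sharply, the input is likely outside what the system was trained on.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: inputs in [-1, 1], a quadratic target with noise.
x_train = rng.uniform(-1.0, 1.0, 200)
y_train = x_train**2 + 0.05 * rng.normal(size=200)

# A small ensemble of cubic fits on bootstrap resamples of the data.
models = []
for _ in range(10):
    idx = rng.integers(0, 200, 200)
    models.append(np.polyfit(x_train[idx], y_train[idx], deg=3))

def uncertainty(x):
    # Spread of ensemble predictions: a crude "beyond my training?" signal.
    preds = np.array([np.polyval(c, x) for c in models])
    return preds.std(axis=0)

in_dist = uncertainty(np.array([0.0, 0.5]))   # inside the training range
out_dist = uncertainty(np.array([3.0, 4.0]))  # far outside it

print(f"in-distribution spread:  {in_dist}")
print(f"out-of-distribution spread: {out_dist}")
```

Inside the training range the members agree closely; far outside it their extrapolations diverge, and that divergence is exactly the signal a system can use to hand control back to a human.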
The magic happens when we combine rigorous mathematics with real-world wisdom. In developing AI-assisted ventilator control, we did not just optimize for medical outcomes; we designed systems that respect the irreplaceable value of human medical expertise. In optimizing geothermal wells, we bring together experts in geology, geophysics, and reservoir engineering to formulate the reward the AI optimizes. These AIs provide insights and suggestions but always defer to human experts for value alignment and judgment, both in design and during critical moments.
Is it enough? Unfortunately, no.
Beyond Safety through Validation: AI for Planetary Sustainability
I have realized that trustworthy AI is not just about preventing harm, it is about actively creating good. The climate crisis demands rapid action, but our solutions often create new problems. Electric vehicles need lithium; data centers and grids need copper; and renewable energy requires critical and rare earth elements. Without careful planning, the green revolution could devastate the very communities it is meant to help (or those in faraway lands whose voices are often muted).
This is where AI becomes a force for justice. My work with Mineral-X develops AI systems that see the full picture, not just where resources are, but how extraction affects local communities, ecosystems, and global supply chains.
Our lithium supply chain model does not just optimize for efficiency; it optimizes for resilience, ensuring countries are not held hostage by unstable suppliers. Our sustainable mining and exploration models echo the voices of communities fighting for their native lands and livelihoods. Our AI agents search for mineral extraction and processing routes that minimize waste harmful to the environment and to families nearby.
The Human Element
Throughout my research, one principle guides everything: technology should amplify human capabilities, not replace human judgment. This philosophy shapes how I approach the most challenging applications, from transportation and supply chains to energy transition and climate change. I work closely with engineers, geologists, business leaders, and policymakers to ensure that the technology we build is not only safe and effective but also equitable and sustainable. Everything is validated with the impacted communities and local context in mind.
The rise of agentic AI (systems that can autonomously plan, reason, and act across complex environments) presents us with a critical dilemma. We face two equally dangerous extremes: the Silicon Valley “move fast and break things” mentality applied to systems capable of breaking entire supply chains or financial markets, and the paralyzing pessimism that delays beneficial deployments while communities facing climate disasters desperately need AI assistance. Blind enthusiasm risks creating systems that optimize for narrow metrics while inadvertently undermining human agency or creating irreversible dependencies. Yet excessive caution perpetuates the very problems these systems could solve, ensuring transformative technologies remain concentrated among those who already have power. We need principled urgency, deploying agentic AI where it can do the most good while ensuring these systems amplify rather than replace human judgment, always with rigorous validation and community oversight. Designing a good product that uses agentic AI is not the end goal but a means to scale up our impact and our ability to align with the values of the local communities it serves.
This human-centered approach extends to how I conduct research itself. All my tools are open-source because I believe the best solutions emerge from diverse perspectives. My collaborations span from Silicon Valley’s cutting-edge labs to universities in Southeast Asia, bringing together voices that are too often excluded from AI development.
The Path Forward
Looking ahead, I see three critical frontiers for trustworthy AI.
- First, we must extend safety guarantees throughout an AI system’s entire lifecycle, not just during initial deployment but through years of operation in changing environments.
- Second, we need to democratize AI safety, ensuring that every country and community can develop and deploy AI systems that reflect their values and serve their needs.
- Third, we must tackle planetary-scale coordination problems, from climate change to resource allocation, that no single nation can solve alone.
These are not just technical challenges; they are fundamentally human ones that I believe we can solve together. They require not just better algorithms but better institutions, not just mathematical proofs but moral clarity, not just innovation but wisdom. I cannot solve these problems alone. Whether you are a student dreaming of a better future, a policymaker grappling with AI governance, an industry leader seeking trustworthy solutions, or simply someone who cares about the world we are building, I invite you to join this mission. Together, we can ensure that the AI revolution enhances rather than replaces human agency, addresses rather than exacerbates inequality, and safeguards rather than threatens our planet.
The future is not something that happens to us; it is something we build now. And with trustworthy AI as our tool, we can build a future worthy of our highest aspirations.
Contact me to explore collaborations or research opportunities!
Page last updated: 2025-07-04 – grammar and tone edited by AI