
Event

AI & Robotics Hackathon @ Bangkok 2025

Copyright

Image by gt39 from Pixabay

Wednesday — Friday
December 17, 2025 —
December 19, 2025

AI & Robotics Hackathon by the MIT Media Lab @ Bangkok 2025: Building a Better Future for All

Sponsored by Bangkok Bank

-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-

How might emerging technologies shape our everyday lives while also creating new employment opportunities for local communities?

This December, join us for a dynamic hackathon exploring the intersection of AI, robotics, and sensor technologies with real-world challenges. Participants will collaborate to design innovative solutions that enhance daily life and foster meaningful employment across diverse communities.

Explore AI, Robotics, and Sensor Tech sub-themes like:

  1. City – Smarter urban planning, housing equity, mobility, and community decision-making.
  2. Education – Inclusive learning for all ages and geographies.
  3. Health – Intelligent systems for wellbeing, preventive care, and rehabilitation.

Qualifications for applicants:

  • Graduate-level candidates and above are preferred; however, we welcome applications from candidates with unique backgrounds and diverse experience, including those with a Bachelor’s degree and professional experience.

Award

  • The winning team will be awarded a grand prize of THB 300,000, while the first runner-up and second runner-up will receive THB 150,000 and THB 75,000, respectively.

    Additional Support

    For selected international contestants, the sponsor will cover regional flights and accommodation expenses.

Copyright

Bangkok Bank

EVENT DETAILS

Agenda

Wednesday, December 17, 2025

Location: Cloud11

  • 9:00am - 9:15am: Welcome & Overview 
  • 9:15am - 9:30am: Overview of the MIT Media Lab
  • 9:30am - 11:45am: Short Talks by Media Lab Participants (15 min each x 8 students)
  • 11:45am - 12:00pm: Team Grouping, Introduction, and Ice-Breaking
  • 12:00pm - 1:00pm: Lunch - More Detailed Team Intro and Team Building 
  • 1:00pm: Hacking Begins 

Thursday, December 18, 2025 

Location: Cloud11

  • Morning: Hacking Continues
  • 12:00pm - 1:00pm: Lunch - Mid-term Project Updates From Each Group
  • 1:00pm: Hacking Continues

Friday, December 19, 2025

Location: Cloud11

  • Morning: Hacking Continues 
  • 12:00pm - 1:00pm: Lunch 
  • 1:00pm: Hacking Continues
  • 3:00pm - 5:00pm: Project Result Sharing and Presentations 
  • 5:00pm - 7:00pm: Award Ceremony 

Media Lab Team

Copyright

Bangkok Cohort 2025

Joe Paradiso, Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab 

Specialty: Wireless sensing systems, wearable and body sensor networks, smart buildings, environmental sensing systems, energy harvesting, power management, ubiquitous/pervasive computing and the Internet of Things, human-computer interfaces, space-based systems, smart materials, e-textiles, digital twins in virtual worlds, electronic music controllers, electronic music systems, interactive music/media, human augmentation.

Talk Title: New Modalities for Sensing, Interaction, and Human Experience

Description: We are living in an era driven by ubiquitous sensing. The visions that many of us touted in the early days of ubiquitous/pervasive computing have largely come to pass in this age of IoT: sensors of all kinds are now embedded in smart devices across our environments, drawing very little power and connecting seamlessly to widespread networking infrastructure that becomes ever smarter and more responsive. Where do we go next? The crux of much of this will be how this information connects to people, and how our perception, cognition, and identity effectively expand beyond our corporeal confines. This talk explores these questions through the lens of recent projects in my Responsive Environments research group, which involve new platforms for sensing at various scales in the physical world (wearables, smart buildings, connected landscapes, and space missions) and how this information connects to people in different ways, from manifesting sensed or inferred phenomena in virtual analog environments to interfaces modulated by user attention and focus or augmented by real-time AI.

Cedric Honnet, PhD Candidate, MIT Media Lab (Responsive Environments)

Specialty: HCI (Human-Computer Interaction), Embedded Systems, Sensing, Wearables, e-Textiles, Miniaturization, Manufacturing 

Talk Title:  FiberCircuits: A Miniaturization Framework to Manufacture Fibers That Embed Integrated Circuits

Description:  While electronics miniaturization has propelled the evolution of technology from desktops to compact wearables, most devices are still rigid and bulky, often leading to abandonment. To enable interfaces that can truly disappear and seamlessly integrate into daily life, the next evolutionary leap will require further miniaturization to achieve full conformability. With FiberCircuits, we offer design and fabrication guidelines for the manufacturing of high-density circuits that are thin enough for full encapsulation within fibers. Our demonstrations include a 1.4 mm-wide ARM microcontroller with sensors as small as 0.9 mm-wide and arrays of 1 mm-wide addressable LEDs, which were woven into our interactive textiles. We provide example applications from fitness to VR, and propose a scalable fabrication process to enable large-scale deployment. To accelerate future research in HCI, we also made our platform Arduino-compatible, created custom libraries, and open-sourced all the materials. Finally, our technical characterizations demonstrate FiberCircuits' durability, thanks to its silicone encapsulation for waterproofness and braiding for robustness. From wearables to insertables or even implantables, we believe that by making miniature circuits accessible to researchers and beyond, FiberCircuits will open possibilities for new scalable interfaces that embody imperceptible computing.

Michael Fernandez, PhD Candidate, MIT Media Lab (Biomechatronics)

Specialty: Prosthetics and orthotics, machine learning, biomechanics, robotics

Talk Title: Cyborg Design: Toward Functional Limb Restoration with Neural Interfaces

Description: Robotics and machine learning have evolved from rigid, preprogrammed systems into adaptive technologies that learn, predict, and interface directly with the human body. This talk explores advances in restoring upper-limb function after amputation through neuromusculoskeletal interfaces and biomimetic control. The Agonist–Antagonist Myoneural Interface (AMI) and Cutaneous Mechanoneural Interface (CMI) preserve natural proprioceptive and tactile signaling, while direct neural controllers project user intentions into intuitive prosthetic movement. Together, these innovations shift prosthetic devices from mechanical substitutes to natural extensions of the body to restore, and ultimately surpass, human function.


Kristen M. Edwards, PhD Candidate, MIT

Specialty: Artificial Intelligence, Machine Learning, Engineering Design, Manufacturing

Talk Title: Multimodal AI in Design Evaluation: Statistical Perspectives on Reaching Expert-Level Equivalence

Description:  The subjective evaluation of early-stage engineering designs, such as conceptual sketches, has traditionally relied on human experts. However, expert evaluations are time-consuming, expensive, and sometimes inconsistent. Recent advances in vision-language models (VLMs) offer the potential to automate design assessments, but it is crucial to ensure that these AI “judges” perform on par with human experts. This research introduces a rigorous statistical framework to evaluate whether an AI judge's ratings align with those of human experts. Our results show that, with in-context learning and reasoning, VLMs often achieve better agreement with experts than trained novices do. Moreover, for some design metrics, reasoning-enabled VLMs can even achieve lower mean absolute error with experts’ ratings than experts do with each other. These findings suggest that, on certain statistical tests, AI judges are not only approaching expert-expert equivalence but in some cases surpassing it.
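The expert-equivalence claim above can be made concrete with a small sketch (hypothetical ratings, not the study's code or data): compute the mean absolute error (MAE) between two experts' ratings of the same designs, then between an AI judge's ratings and each expert's, and compare.

```python
# Illustrative sketch of MAE-based agreement, with hypothetical 1-5 ratings
# of six concept sketches. Not the paper's actual method or data.

def mae(a, b):
    """Mean absolute error between two equal-length rating lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

expert_1 = [4, 3, 5, 2, 4, 3]
expert_2 = [5, 3, 4, 2, 3, 3]
ai_judge = [4, 3, 5, 2, 3, 3]

# Baseline: how much two human experts disagree with each other.
expert_vs_expert = mae(expert_1, expert_2)  # 0.50

# AI judge's average disagreement with the experts.
ai_vs_experts = (mae(ai_judge, expert_1) + mae(ai_judge, expert_2)) / 2  # 0.25

print(f"expert-expert MAE: {expert_vs_expert:.2f}")
print(f"AI-expert MAE:     {ai_vs_experts:.2f}")
```

In this toy case the AI-expert MAE is lower than the expert-expert MAE, which is the shape of the result the talk describes for some design metrics; the actual study adds statistical tests of equivalence on top of such comparisons.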

Annika Thomas, PhD Candidate, MIT MechE

Specialty: Spatial AI, Robotic Perception, Autonomy in Space
Sub-topics: #Robotics, #AI

Talk Title + Description: TBD

Valdemar Danry, PhD Candidate, MIT Media Lab (Fluid Interfaces)

Specialty: #AI #Cognition #Human-AI-Interaction #Reasoning #WearableAI #CriticalThinking

Talk Title: AI as a Cognitive Copilot: Designing Tools that Make Us Smarter

Description: AI systems are becoming our daily companions, answering questions, shaping choices, and in subtle ways taking over our thinking. Left unchecked, they risk making us passive, dependent, and less able to reason for ourselves. But what if these same technologies could do the opposite: help us become sharper thinkers, more curious learners, and wiser decision-makers? In this talk, Valdemar Danry (MIT Media Lab) presents large-scale studies revealing where today’s AI can undermine us, along with new types of chat interfaces, wearables, and real-time signals that help people reason more clearly, question more deeply, and learn faster.

Michelle Kim, MS Candidate, MIT Media Lab (Fluid Interfaces)

Specialty: HCI (Human-Computer Interaction), Multimodal AI, Health and Wellness, Behavior Change, Affective Computing, Wearables, Explainable AI (XAI)

Talk Title: Meta-Self: Explainable Wearable AI Systems for Supporting Metacognition and Self-Regulation

Description: In a world where wearable AI systems can sense our bodily signals and emotions and guide our behaviors, how can we design systems that instead cultivate our ability to guide ourselves? Sharing insights from prior in-the-wild user studies, this talk motivates a new class of technology that preserves human agency while helping people develop the capacity to examine their internal processes—and, in doing so, build skills to manage and regulate their body, emotions, and behavior to purposefully achieve desired outcomes.

Lucy Zhao, MS Candidate, Media Lab (Multisensory Intelligence), MIT EECS

Specialty: Large Language Models, Large Multimodal Models, Mechanistic Interpretability, Reasoning

Talk Title: New Advances in Multisensory Intelligence

Description: The Multisensory Intelligence research group studies the foundations of multisensory artificial intelligence to create human-AI symbiosis across scales and sensory mediums. The new AI technologies we develop are able to learn and interact with the world through integrating diverse sensory channels, significantly increasing their capability and flexibility. Our group further draws upon multidisciplinary backgrounds to integrate the theory and practice of AI into many aspects of the human experience, including enhancing our digital productivity, developing new technologies for creative expression, and improving our holistic health and wellbeing. Finally, our group carefully considers the responsible deployment of AI technologies, including quantifying and mitigating real-world societal concerns around bias, fairness, and privacy, participatory co-design with stakeholders, and developing policies around the real-world deployment of AI foundation models.

Lucy Li, PhD Candidate, MIT Media Lab (Tangible Media)

Specialty:  AI, Sociable Machines, Tangible Interfaces, Interaction Design, Learning, Remote Collaboration

Talk Title: Co-Ideation Across Time: Revitalizing Legacy Design Sketchnotes with Conversational AI Agents to Foster Intergenerational Collaboration

Description: While legacy sketchnotes capture rich design rationales and inspirations, they are rarely reused in contemporary practice. We present Co-Ideation Across Time, which uses Large Language Models (LLMs) to transform decades-old design sketchnotes into interactive “AI-augmented knowledge objects”. Our system digitizes over 2,000 pages of alumni sketchnotes and connects them with conversational agents trained on the corresponding theses and publications, enabling current and future students to engage in multimodal dialogue with past ideas and researchers. A user study with 12 participants showed that interacting with the system deepened understanding of abstract concepts, increased idea diversity, and fostered a stronger sense of continuity with the community’s legacy. Our contributions are threefold: (1) a method for integrating design legacies with LLM-driven conversational agents; (2) an empirical study demonstrating how this approach supports learning and intergenerational knowledge sharing; and (3) a conceptual framing of knowledge objects as active participants in design ideation.
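The pipeline this abstract describes (digitize pages, retrieve the relevant page for a question, hand it to a conversational agent as context) can be sketched in miniature. Everything below is hypothetical: the archive text, the word-overlap retrieval, and the prompt format stand in for the real system's embeddings and LLM backend.

```python
# Toy sketch of retrieval-augmented dialogue over an archive of digitized
# sketchnote pages. All data and the scoring scheme are hypothetical; a real
# system would use vector embeddings and an actual LLM instead of this stub.

ARCHIVE = {
    "page_001": "Early tangible interface sketches: sliders coupled to sound.",
    "page_002": "Notes on remote collaboration and shared physical tokens.",
    "page_003": "Ideas for sociable machines that mirror user gestures.",
}

def retrieve(query, archive):
    """Return the page ID whose text shares the most words with the query."""
    q = set(query.lower().split())
    return max(archive, key=lambda k: len(q & set(archive[k].lower().split())))

def build_prompt(query, archive):
    """Assemble the context a conversational agent would receive."""
    page_id = retrieve(query, archive)
    return (f"Context ({page_id}): {archive[page_id]}\n"
            f"Question: {query}\n"
            f"Answer in the voice of the original designer:")

print(build_prompt("How were tangible sliders used?", ARCHIVE))
```

A production variant would swap the word-overlap scorer for semantic search over all 2,000+ pages and route the prompt to an agent conditioned on the matching thesis, but the retrieve-then-converse structure is the same.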

Inquiry:
Media Lab-Bangkok Bank Event Organizers (hack-at-bkk@media.mit.edu)
Mirei Rioux, MIT Media Lab (mirei@media.mit.edu)

Also interested in the AI & Robotics Competition @ Bangkok 2025?

More Events