"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"

AI2 Research Lab Spring Feast

March 24, 2025

We gathered at Dr. Min Kyu Kim's home for our spring semester feast during spring break on March 18! It was a warm and lively gathering, joined not only by our lab members but also by other professors we know!

Everyone brought delicious dishes for a potluck meal, and we had fun conversations full of stories and laughter. One of the highlights of the evening was playing Yutnori, a traditional Korean board game that brought out everyone’s competitive spirit and added lots of fun to the night.

It was a wonderful time spent together, making us all feel even more connected as a lab family. We're already looking forward to the next one!

AI-ALOE's Spring Retreat

March 14, 2025

Our SMART team at AI-ALOE (Dr. Min Kyu Kim, Jinho Kim, and Yoojin Bae), along with other AI2RL members (Hyunkyu Han and Seora Kim), attended AI-ALOE's Spring Retreat on March 6-7 at the Coda Building, Georgia Tech. This year's retreat saw the highest participation yet, with approximately 40 attendees joining either in person or virtually. During the meeting, our graduate research associates Jinho and Yoojin gave presentations.

The first day focused on Design & Architecture in the morning, where Jinho Kim represented our team and presented SMART's design guidelines before engaging in breakout group discussions. In the afternoon, the focus shifted to Theory, Integration & Partnership, where Yoojin Bae shared insights on the confluence and integration of our teams, followed by further discussions.

On the second day, participants explored key topics including risk assessment, contingency planning, commoditization, and data sharing, with discussions divided into self-selected groups.

The retreat provided opportunities for deep discussions, knowledge exchange, and strategic planning.

Presentations at the AI-ALOE Foundational and Use-Inspired AI Meetings

March 1, 2025

Our SMART team at AI-ALOE (Dr. Min Kyu Kim, Jinho Kim, and Yoojin Bae) presented at the biweekly Foundational and Use-Inspired AI meetings on February 24. During our presentation, we shared key aspects of SMART's design, starting with its overarching goal and our approach, which incorporates a whole-person perspective.

We highlighted the theoretical foundations that underpin SMART and discussed the design guidelines we developed, starting with an explanation of the structural framework we used. Additionally, we demonstrated how SMART guidelines fit within this framework by providing examples to illustrate their application.

Finally, we reflected on our experience preparing for this presentation, sharing insights gained throughout the process. We look forward to hearing from the other teams at AI-ALOE and learning about their design guidelines!

Lunch and Learn Talks

February 20, 2025

Our graduate research associates, Yoojin Bae and Jinho Kim, presented at Georgia State University's Department of Learning Sciences Lunch and Learn on February 11. The talk was held both in person and remotely via Webex, with approximately 20 attendees, including faculty members and graduate students from across the department.

A Study on AI-Augmented Concept Learning: Impact on Learner Perceptions and Outcomes in STEM Education.
Presented by Yoojin Bae 

This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate the transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while changes in technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use and metacognitive awareness. The impact of concept learning through SMART on knowledge test scores was partially observable. This research underscores the holistic integration of AI tools, highlighting the importance for educators of aligning instructional methods such as AI with learning objectives, content, assessment tests, and learners' AI literacy levels, particularly within the domain of online STEM education.

Based on: Bae, Y., Kim, J., Davis, A., & Kim, M. K. (2024). A Study on AI-Augmented Concept Learning: Impact on Learner Perceptions and Outcomes in STEM Education. In Lindgren, R., Asino, T. I., Kyza, E. A., Looi, C. K., Keifert, D. T., & Suárez, E. (Eds.), Proceedings of the 18th International Conference of the Learning Sciences - ICLS 2024 (pp. 1450-1453). International Society of the Learning Sciences.

Education with Generative AI: Enhancing Simulation-based Nursing Education.
Presented by Jinho Kim

Simulation-based learning is a vital component of nursing education, but it often faces challenges due to its reliance on human simulation operators, which limits scalability and authenticity. This study aims to address these challenges and enhance scenario-based simulation learning through the use of generative AI as a role-playing agent in nursing simulations. Using an existing simulation scenario from a bachelor's-level nursing course and a design-based research approach, we iteratively developed an evaluation framework and a simulation interaction question set to evaluate the performance of generative AI models. The findings highlight strengths, such as accurate role-based dialogue generation, and limitations, including challenges with common-sense reasoning. Recommendations for leveraging generative AI in simulation-based learning include refining the evaluation framework to articulate clearer distinctions between criteria, designing question sets that integrate in-context and out-of-context prompts to comprehensively evaluate AI's ability to handle ambiguities, and refining scenarios through iterative feedback to ensure consistency and clarity.