Current Projects

Accelerating Skill Acquisition in Complex Psychomotor Tasks via an Intelligent Extended Reality Tutoring System

Mohsen Moghaddam (PI), Kemi Jona (Co-PI), Casper Harteveld (Co-PI), Mehmet Kosa (Co-PI)

$849,584.00, 2023-2026
Manufacturing, medical laboratory, construction, and many other jobs require workers to learn complex physical "psychomotor" tasks that combine perceptual and motor skills. These are often taught through an apprenticeship model on real jobsites, which poses both productivity and safety risks for workers. Further, relatively little is known about how to assess trainees' skill levels in these tasks and adapt training practices based on those assessments. This project tackles these problems by developing a new generation of intelligent tutoring systems that combine extended reality (XR), artificial intelligence (AI), and Internet of Things (IoT) technologies to support training and assessment of the complex skills required by modern, highly automated manufacturing facilities. The high-level idea is that new sources of data captured by XR headsets, wearable devices, cameras, and IoT sensors can be used to build models of psychomotor skill development and new methods for providing personalized, just-in-time coaching guidance. Through partnerships with manufacturing consulting firms, local community colleges, and K-12 schools, the project will enhance the skill development of a diverse population of learners and professionals and expand interest in advanced manufacturing careers.
The project team brings together expertise in engineering, cognitive psychology, learning sciences, game design, and XR to make fundamental contributions to both learning science and learning technologies around just-in-time, personalized, context-aware provision of learning scaffolds for manufacturing workers learning new skills. On the learning side, the team will examine the stages of expertise development for specific psychomotor tasks and the effectiveness of adaptive interventions on learners' engagement, performance gains, and accuracy. A virtual reality (VR) game set in an advanced manufacturing scenario will be used to collect ecologically valid baseline data and prepare novice learners for real-world task performance. On the technology side, the team will build and validate an intelligent XR tutoring system to accelerate the learning of psychomotor tasks whose high complexity arises from task structures and human information-processing requirements. The innovative aspects of the technology include data-driven activity understanding (e.g., task step identification and error detection) and user modeling (e.g., cognitive load detection) through novel multimodal AI architectures designed to process and fuse data captured from augmented reality (AR) headsets, wearables that capture physiological data, cameras, IoT sensors, and manufacturing machines. Both the learning and technology innovations will be validated through extensive laboratory studies; together, the work will lead to an intelligent feedback algorithm that dynamically adapts the nature, frequency, and depth of feedback to the expertise of the learner to facilitate optimal learning and speed-to-competence.
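As an illustration of what dynamically adapting feedback to a learner might look like, the toy policy below maps an estimated expertise level and cognitive load to a feedback mode. The weights, thresholds, and mode names are purely hypothetical assumptions for this sketch, not the project's actual algorithm:

```python
def feedback_level(expertise, cognitive_load):
    """Toy adaptive-feedback policy (illustrative only).

    Lower expertise or higher estimated cognitive load yields more
    detailed, more frequent guidance. Both inputs are assumed to be
    normalized to [0, 1]; the weights and cutoffs are made up for
    illustration.
    """
    need = (1 - expertise) * 0.6 + cognitive_load * 0.4
    if need > 0.66:
        return "step-by-step"  # full guidance for struggling novices
    if need > 0.33:
        return "on-demand"     # hints after errors or on request
    return "summary"           # minimal scaffolding for experts
```

A real system would replace the hand-set weights with parameters estimated from the multimodal learner-modeling data described above.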
We developed AR and VR systems for training in cold spray additive manufacturing and robotic painting, using Unity to provide task guidance through interactive dialog boxes. These systems differ in immersion levels and tool usage, leveraging multimodal data such as gaze and hand tracking to enhance visualization and interaction. Open-source documentation and files are available at xert.co, supporting adoption and customization. Designed to improve psychomotor skill learning, the systems enable data-driven cognitive load estimation, struggle detection, expertise estimation, and constructive design friction, showcasing AR and VR’s transformative potential for industrial training in complex manufacturing environments.
This study explores the potential of augmented reality (AR) for adaptive, personalized on-the-job training by investigating the relationship between gaze behavior, visual attention, and expertise. Using eye tracking and computer vision, the research examines how novices and experts differ in fixation/saccade durations, attention allocation to action-relevant areas of interest (AOIs), scanpath transitions, and pupil size variations during AR-guided procedural tasks. By analyzing gaze, pupillometry, and egocentric video data in two tasks, the findings reveal how expertise influences visual attention and gaze interactions. This work advances personalized AR interventions, enhancing training and assistance in industrial applications.
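As a concrete example of one gaze metric involved, fixation durations are commonly derived from raw gaze samples with a dispersion-threshold (I-DT style) detector. The minimal sketch below assumes timestamped (t, x, y) gaze samples; the thresholds and data layout are illustrative assumptions, not the study's actual pipeline:

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT style) fixation detection sketch.

    samples: list of (t, x, y) gaze points sorted by time t (seconds).
    Returns (start_t, end_t) windows whose points stay within
    `max_dispersion` (summed x/y range) for at least `min_duration`.
    """
    fixations, i = [], 0
    while i < len(samples):
        j = i
        xs, ys = [samples[i][1]], [samples[i][2]]
        # Grow the window while the point cloud stays compact.
        while j + 1 < len(samples):
            x, y = samples[j + 1][1], samples[j + 1][2]
            dispersion = (max(xs + [x]) - min(xs + [x])) + \
                         (max(ys + [y]) - min(ys + [y]))
            if dispersion > max_dispersion:
                break
            xs.append(x)
            ys.append(y)
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            fixations.append((samples[i][0], samples[j][0]))
            i = j + 1
        else:
            i += 1
    return fixations
```

Comparing the distribution of these window lengths (and the saccades between them) across novices and experts is one way the expertise differences described above can be quantified.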

FW-HTF-R: Fostering Learning and Adaptability of Future Manufacturing Workers with Intelligent Extended Reality (IXR)

Mohsen Moghaddam (PI), Stacy Marsella (Co-PI), Kemi Jona (Co-PI), Alicia Modestino (Co-PI), Nicholas Wilson (Co-PI)

$2,000,000.00, 2021-2025
This research project imagines the future of work in precision manufacturing where the spatial and causal reasoning and decision-making abilities of workers are augmented through teaming with intelligent extended reality (IXR) technologies. Evidence suggests that the newer wave of automation in manufacturing is not so much to replace workers but rather to complement human work to increase precision, safety, and product quality. Yet, U.S. manufacturers are not adequately addressing the changing nature of skill requirements which is anticipated to leave 2.4 million manufacturing jobs unfilled by 2030. This project will address the urgent need for breakthrough technologies that enable workplace-based learning and rapid upskilling of the manufacturing workforce on complex, cognitively demanding, and hard-to-automate tasks. The project will focus on precision machining and inspection in the aviation industry as the specific work context for building and validating the IXR technologies, which is also expected to inform the technology development in other industries such as medical, automotive, semiconductor, and defense. The convergent research team will create new technological pathways to enable intelligent worker-XR teaming and advance the fundamental understanding of its impacts on labor economy and worker learning and innovation. This project aims to create new perspectives, methods, and discoveries to unleash the full potential of America's manufacturing workforce, and as such, strengthen national prosperity and economic competitiveness in precision manufacturing. 
This project brings together several disciplines, including engineering, learning sciences, social sciences, economics, computer sciences, psychology, and workforce development. The investigator team is structured to achieve multiple convergent goals across the three dimensions of the Future of Work at the Human-Technology Frontier: (1) The Future Work dimension will investigate the changes in employer skill requirements for precision manufacturing including education, years of experience, and actual skills, using a proprietary database of 160 million online job vacancies. Expert interviews, firm-level surveys, and in-depth case studies will investigate training and upskilling practices for incumbent and entry-level workers, explore accessibility of the IXR approach for certain groups of workers and firms, and identify economic barriers and opportunities for adopting the IXR technology. (2) The Future Technology dimension will advance the fundamental understanding of how new sources of multimodal data captured by XR devices, digital thread, IoT, and cloud-based analytics can be harnessed to interpret, predict, and guide the behavior of precision manufacturing workers. A novel IXR technology will be built and validated that adapts the scientific methods of computer vision, natural language understanding, and inference engines to provide intuitive and personalized assistance to workers performing complex reasoning and problem-solving tasks. (3) The Future Worker dimension will generate new knowledge about the affordances of worker-XR teaming to support the development of workers' adaptive expertise for increasingly complex manufacturing tasks, building on research from the learning sciences that examines cognitive processes associated with complex reasoning and problem solving. It is expected that the knowledge generated in this project will elicit new pathways for the design of future collaborative human-technology systems for training adult workers beyond XR.
This project has been funded by the Future of Work at the Human-Technology Frontier cross-directorate program to promote a deeper basic understanding of the interdependent human-technology partnership in work contexts by advancing the design of intelligent work technologies that operate in harmony with human workers.
This research develops a real-time inference engine that detects task steps and errors by integrating hand action recognition with IoT and IMU data. Multimodal inputs, including RGB-D, IMU, and smart device data, are fused with hand action inferences. A rule-based system powered by a task graph monitors step-completion criteria to track progress and identify the current step in AR-guided tasks. If a step is skipped, an intelligent AR system provides corrective guidance, improving task accuracy. The approach supports seamless task progression, leveraging AR for error prevention and real-time assistance in procedural workflows.
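A minimal sketch of such a rule-based, task-graph step monitor might look as follows. The `Step`/`TaskGraphMonitor` names, the prerequisite lists, and the observation dictionary are hypothetical, not the project's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One node of a (hypothetical) task graph."""
    name: str
    prerequisites: list   # names of steps that must finish first
    is_complete: callable # completion predicate over an observation dict

class TaskGraphMonitor:
    """Tracks step completion and flags skipped prerequisites."""

    def __init__(self, steps):
        self.steps = {s.name: s for s in steps}
        self.done = set()

    def update(self, obs):
        """Apply fused sensor/action observations for one time slice.

        Returns a list of (step, missing_prerequisites) pairs for steps
        that completed out of order -- the trigger for corrective AR
        guidance in the scheme described above.
        """
        skipped = []
        for step in self.steps.values():
            if step.name in self.done or not step.is_complete(obs):
                continue
            missing = [p for p in step.prerequisites if p not in self.done]
            if missing:
                skipped.append((step.name, missing))
            self.done.add(step.name)
        return skipped
```

In practice the completion predicates would be driven by the hand-action recognizer and IoT/IMU signals rather than hand-written lambdas.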
This research presents an AR system that enhances intent communication between humans and collaborative robots during complex assembly tasks. Using a Unity-ROS pipeline, the system integrates multimodal input and output for dynamic human-robot interaction. User studies evaluated its impact on task performance and satisfaction, comparing AR-enabled scenarios with traditional methods. Results showed AR significantly improved communication clarity, reduced errors, and increased task efficiency. Immediate visual feedback minimized task completion times and errors while boosting user confidence and trust. These findings emphasize the critical role of transparent robot actions in achieving seamless collaboration.
Understanding user actions from egocentric videos is vital for intelligent augmented reality (AR) systems. This paper presents a pipeline for egocentric hand action recognition tailored for AR applications. Leveraging an AR-guided data collection method, it eliminates manual annotation and introduces a skeleton-based model optimized for real-time AR scenarios. A case study on industrial precision inspection tasks validated the approach, generating a rich dataset and refining features for training. Extensive evaluations, including offline and real-time tests, demonstrated the model’s robustness and practicality, highlighting its potential for enhancing user interaction and system adaptation across diverse AR use cases. 
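For intuition only, the drastically simplified sketch below shows the data shape of skeleton-based recognition: a window of wrist-relative hand-joint positions is flattened into a unit feature vector and matched to per-class template vectors by cosine similarity. The paper's actual model is a learned real-time recognizer; nothing here reflects its architecture:

```python
import numpy as np

def featurize(window):
    """window: (T, J, 3) array of T frames x J hand joints x xyz.

    Normalizes joints relative to joint 0 (treated as the wrist here,
    an illustrative assumption) and returns a unit feature vector.
    """
    w = window - window[:, :1, :]  # wrist-relative coordinates
    f = w.reshape(-1)
    return f / (np.linalg.norm(f) + 1e-8)

def classify(window, templates):
    """templates: dict of label -> feature vector from featurize().

    Returns the label whose template has the highest cosine
    similarity with the window's features.
    """
    f = featurize(window)
    return max(templates, key=lambda k: float(f @ templates[k]))
```

A trained skeleton model would replace the template matching with learned temporal features, but the input tensor layout is the same.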
This study explores challenges in human-machine interfaces (HMIs) for industrial machines, emphasizing usability issues caused by data overload and misalignment between HMIs and machine operations. To address these issues, an augmented reality (AR) interface integrated with IoT is proposed, enabling spatial and temporal alignment of data and controls with machines. Developed using a Kepware-ThingWorx-Vuforia pipeline and tested in a cyber-physical factory for cellphone assembly, the system was evaluated against Siemens HMIs using a HoloLens 2. A user study with 20 participants revealed that the AR interface significantly improved efficiency, usability, task completion times, and error rates while reducing mental load.

Completed Projects

CR: From User Reviews to User-Centered Generative Design: Automated Methods for Augmented Designer Performance

Mohsen Moghaddam (PI), Tucker Marion (Co-PI), Paolo Ciuccarelli (Co-PI)

$416,568.00, 2021-2024
This project investigates design processes where the unmet needs of users are elicited from social media, online forums, and e-commerce platforms, and translated into new concept recommendations for designers using artificial intelligence (AI). The motivation stems from the growing abundance of user-generated feedback and a lack of advanced computational methods for drawing useful design knowledge and insights from that data. The research will establish a rigorous computational foundation that (1) enables large-scale elicitation of user needs from online reviews using advanced natural language processing (NLP) algorithms, and (2) translates the elicited needs into the visual and functional aspects of new concepts using novel generative adversarial network (GAN) algorithms. The theoretical innovations will advance the fundamental understanding of how AI can augment the performance and creativity of designers in early-stage product development processes. This project will boost national competitiveness in innovation by creating new opportunities for designing innovative, inclusive, and competitive products. The convergent research team will create outreach initiatives for STEM students, teachers, and underrepresented minorities, and engage with industry and research stakeholders to ensure technology-market fit and successful dissemination.
The overarching goal of this project is to establish a transformative, data-driven paradigm for empathetic design that augments the ability of designers to uncover and address the critical yet latent needs of users at scale. The project will create scalable and computationally efficient NLP algorithms that capture the needs of ordinary users from reviews, identify the underlying usage contexts, and infer extreme use-cases to facilitate latent need elicitation. Focus groups and interviews involving ninety design experts and crowdsourced evaluators will be conducted to test the first research hypothesis: The NLP algorithms elicit needs that are nonobvious, difficult to identify, and provide significant value and originality. The project will build novel GAN architectures and algorithms for generative design of form and function conditioned on the elicited latent user needs. New multimodal deep regression models will be developed to evaluate the quality of the generated samples based on user feedback on existing products. Laboratory studies involving fifty subjects and fifty evaluators will be performed to test the second research hypothesis: The GAN-generated design recommendations significantly improve the quality and variety of the design concepts generated by human designers. The project will lead to broad societal outcomes by fostering designer-AI co-creation and innovation centered on empathy with users to bridge the gap between user need discovery and design outcomes.

FW-HTF-P: Training an Agile, Adaptive Workforce for the Future of Manufacturing with Intelligent Augmented Reality

Mohsen Moghaddam (PI), Stacy Marsella (Co-PI), Kemi Jona (Co-PI), Alicia Modestino (Co-PI)

$149,999.00, 2020-2021
The future of the American manufacturing workforce faces a perfect storm of challenges: (1) a shortage of workers due to the retirement of the Baby Boom generation, (2) a shifting skillset due to the introduction of advanced technologies, and (3) a lack of understanding and appeal of manufacturing jobs among younger cohorts. Consequently, over 2.4 million U.S. manufacturing jobs are anticipated to be left unfilled by 2030, with a projected cost of $2.5 trillion to U.S. manufacturing GDP. Augmented reality (AR) has recently been adopted for experiential training and upskilling of manufacturing workers, and has been shown to reduce new-hire training time by 50% through spatiotemporal alignment of instructions with worker experience. However, evidence suggests that workers' overreliance on AR scaffolds can lead to brittle knowledge and degrade their ability to adapt to novel situations. This project will investigate if and how AR can help manufacturing workers develop agility and adaptability on the shop floor while avoiding the risks of dependence on technology and stifled innovation. A new intelligent AR system will enable dynamic adjustment of AR instructions to worker task performance and enhance workers' ability to master complex tasks such as assembly and maintenance. This research will serve the national priority for rapid and lifelong upskilling of the manufacturing workforce, especially underrepresented and underserved minority groups.
A convergent team of learning scientists, labor economists, cognitive psychologists, computer scientists, and manufacturing engineers will investigate three fundamental research thrusts: (1) Future work: Labor market analyses of changes in employer skill requirements will be conducted to understand the degree to which AR technologies have been introduced in the U.S. and the skillsets workers will need in future factories. (2) Future technology: An intelligent AR system will be devised to understand, predict, and guide the behavior of AR-supported workers through adaptive scaffolding of instructions to their performance and level of expertise. (3) Future worker: Hypothesis-driven human-subjects research will be conducted to understand the impacts of adaptive AR scaffolds on worker performance, cognitive load, and learning. The overarching goal of this research is to balance the efficiency and innovation of future manufacturing workers by improving their ability to transfer the acquired knowledge and skills to new situations on the shop floor. Experts from industry, government, and academia will be convened in a multidisciplinary workshop to illuminate the potentials and risks of AR technology for training future workforce and bridging the skills gap in manufacturing.

Augmented and Virtual Reality Tools for Workforce Training in Robot-enabled Manufacturing Techniques

Casper Harteveld (PI), Mohsen Moghaddam (Co-PI)

$1,600,000.00, 2021-2023
This project facilitates the coordinated transformation of the workforce in industry and government to ensure that the training and capabilities of skilled personnel in the government sector can adapt to and keep pace with the rapid advancements in advanced and additive manufacturing practices. My lab develops and tests novel augmented reality (AR) training tools to ensure that a broad base of personnel can be effectively and simultaneously trained in cold spray manufacturing and robotic spraying processes. The developed toolkit will serve as a template for future agile training platforms for deploying advanced additive manufacturing processes. Further adaptations of this flexible and recursive learning tool will refine it to make significant and ongoing contributions to best practices across the industrial base for Industry 4.0, creating immediate workforce training gains and establishing the foundation for long-term transformation of the defense and organic industrial bases.

Developing Integrative Manufacturing and Production Engineering Curricula That Leverage Data Science

Sagar Kamarthi (PI), Jacqueline Isaacs (Co-PI), Xiaoning Jin (Co-PI), Mohsen Moghaddam (Co-PI), Kemi Jona (Co-PI)

$1,999,927.00, 2019-2024
This project will contribute to the national need for well-educated engineers and technicians in production engineering. It will do so by developing modular courses in data science for production engineering. These courses will be developed by Northeastern University in collaboration with MassBay Community College, four Manufacturing USA institutes, and three industry partners. The overall goal of the project is to design, develop, and deploy sustainable online courses and curricula that bridge the production engineering-oriented data science skills gap of incumbent professional engineers and entering engineers and technicians. The program will address four groups of learners: working professionals who need to upgrade their data science skills; career-transitioning learners without manufacturing backgrounds; undergraduates who wish to minor in data science for production engineering; and two-year community college students preparing either to enter the workforce or to continue to a four-year college program.
The project plans to develop: (1) a modular, production engineering-oriented data science curriculum with seven courses that are, in turn, composed of modules; (2) an online course/module recommendation system to help students determine which course or module best meets their needs and current skillset; and (3) credentials, including certificates and a minor in data science for production engineering. The project aims to address the production engineering-oriented data science skills gap, thus helping to meet the demand for workers in manufacturing, which is estimated to have at least two million unfilled positions between 2018 and 2028. The project has six objectives: (1) identify the data science skills gap of the current and future production engineering workforce; (2) develop modular courses enabled by interactive multimedia content with active learning via online labs; (3) develop a course/module recommender system; (4) deploy the developed courses through the Open edX platform and Jupyter Notebooks; (5) study the effectiveness of online courses using theories and tools of learning science; and (6) rigorously evaluate program objectives and outcomes via the expertise of an external evaluator. The project will address research questions that align with these objectives, such as whether learners' prior knowledge helps or hinders learning and what mechanisms best help learners acquire self-directed learning skills. The research component will follow a design-based research approach, with iterative formative assessment driving ongoing curricular improvements. Research data will include results from assessments of the fidelity of the courses and curricula in meeting industry needs, student performance data, and data from surveys and psychometric assessments.
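In its simplest form, a course/module recommendation step like the one planned could rank modules by the learner's remaining skill gap while respecting prerequisites. The sketch below uses a hypothetical data model (skill-name sets and a module dictionary), not the project's actual recommender:

```python
def recommend_modules(learner_skills, modules):
    """Rank modules by how many of their target skills the learner lacks.

    learner_skills: set of skill names the learner already has.
    modules: dict of module name -> {"prereqs": [...], "teaches": [...]}
             (an illustrative schema, not the project's real one).
    Modules with unmet prerequisites or nothing new to teach are skipped;
    ties are broken alphabetically.
    """
    ranked = []
    for name, info in modules.items():
        if not set(info["prereqs"]) <= learner_skills:
            continue  # prerequisites unmet
        gap = len(set(info["teaches"]) - learner_skills)
        if gap:
            ranked.append((gap, name))
    return [name for gap, name in sorted(ranked, key=lambda t: (-t[0], t[1]))]
```

The deployed system would presumably infer `learner_skills` from assessments rather than take them as input, but the gap-ranking idea is the same.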
The curriculum will be designed to accommodate students with differing needs by providing multiple credentialing opportunities from course auditing, certificates in data science for production engineering and applied data science for production engineering, and a college-level minor.