Nurturing talents and professionals for the digital age

Release Date: Dec 19, 2025
In this project, we developed an iOS application called PredicTalk, designed to work with AR glasses to support smoother, more fluent conversations. PredicTalk displays real-time suggestions for possible next phrases based on what the user has just said. By tapping the AR glasses, users can cycle through alternative suggestions. The timing of suggestions can be adjusted via the settings, allowing the app to be tailored to individual preferences and conversational contexts.
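The suggestion-cycling interaction can be sketched with a toy predictor. The bigram table below is an illustrative stand-in; the actual prediction model used by PredicTalk is not described in detail here.

```python
from collections import defaultdict

# Toy sketch of next-phrase suggestion: record which phrase followed which
# in a sample corpus, then offer all observed successors as candidates
# that the user can cycle through with a tap.
class PhraseSuggester:
    def __init__(self, corpus):
        self.next_phrases = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            self.next_phrases[a].append(b)

    def suggest(self, last_phrase):
        """Return candidate next phrases, in observed order."""
        return self.next_phrases.get(last_phrase, [])

corpus = ["How are you?", "I'm fine, thank you.", "How are you?", "Pretty good."]
s = PhraseSuggester(corpus)
print(s.suggest("How are you?"))  # → ["I'm fine, thank you.", "Pretty good."]
```

A production system would replace the bigram table with a language model, but the cycling interface over a ranked candidate list stays the same.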
During development, we addressed delays in the delivery of the AR glasses by creating a custom test environment. We also improved the app’s design and usability by incorporating feedback from real users. This system is not only useful for practicing English conversation but also applicable to multilingual communication and presentation preparation.
Throughout the development process, test users gave highly positive feedback, and we aim to deliver a more natural speaking experience to a wider audience.
Creators: AOHARA Hikaru, ITO Asahi
An application designed to support foreign language vocabulary learning by systematically organizing information on the etymologies of English words. Multiple etymological data sources are summarized in a graph format, allowing for efficient machine processing. Using this dataset, a transformer-based model is trained to learn the relationships between word spellings and their etymological terms. Furthermore, a method has been developed for generating and displaying multiple etymology networks simultaneously, enabling users to compare words that share common roots. This functionality has been implemented in a publicly accessible web application.
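As a rough illustration of the shared-root comparison, the graph below stores each word's etymological parents as adjacency lists; the words and roots are made-up examples, not the project's actual dataset.

```python
# Minimal etymology graph: edges point from a word to its source terms.
etymology = {
    "television": ["tele-", "vision"],
    "telephone": ["tele-", "phone"],
    "vision": ["videre"],
}

def roots(word, graph):
    """Collect all ancestor terms reachable from a word."""
    seen, stack = set(), [word]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def shared_roots(w1, w2, graph):
    """Roots common to both words, enabling side-by-side comparison."""
    return roots(w1, graph) & roots(w2, graph)

print(shared_roots("television", "telephone", etymology))  # → {'tele-'}
```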
Creator: NAKAZAWA Masaki
We developed Synergetica, the world’s first integrated development environment (IDE) that supports the entire workflow of artificial gene circuits, from the design of synthetic gene networks to automatic DNA sequence generation.
Traditionally, designing artificial gene circuits required sketching out circuits by hand and running simulations across multiple, non-integrated tools.
By contrast, Synergetica consolidates all of these steps into a single platform, dramatically boosting development efficiency.
Synergetica places great emphasis on UI/UX, offering two complementary design modes: an intuitive, GUI-based editor and a programming-style domain-specific language (DSL).
This dual approach enables users without deep biology knowledge to construct artificial gene circuits and instantly observe their behavior.
Through Synergetica, we aim not only to enhance research workflows but also to make synthetic gene circuits more accessible, thereby accelerating the democratization of artificial-life system development.
Looking ahead, we plan to add features that use experimental data to continually refine and improve our simulation engine’s performance.
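A minimal sketch of the circuit-to-sequence idea, with hypothetical part names and placeholder sequences (Synergetica's actual DSL syntax and part library are not shown here):

```python
# Illustrative part library; sequences are placeholders, not real biology.
PARTS = {
    "pTet": "TCCCTATCAGTGATAGAGA",   # promoter (placeholder)
    "RBS":  "AGGAGG",                # ribosome binding site (placeholder)
    "GFP":  "ATGCGTAAAGGAGAAGAA",    # coding sequence (placeholder)
    "term": "CCAGGCATCAAATAAAACG",   # terminator (placeholder)
}

def compile_circuit(promoter, gene):
    """Assemble a simple expression unit into a single DNA string."""
    order = [promoter, "RBS", gene, "term"]
    return "".join(PARTS[p] for p in order)

seq = compile_circuit("pTet", "GFP")
print(len(seq), seq[:10])
```

The value of an integrated tool is that this compilation step, the circuit editor, and the simulator all share one representation of the circuit.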
Creators: OKUDA Sota, HOKAO Koki, SUGA Kanta
We developed a “bowel movement analysis device to improve the intestinal environment of families.” The device, “Wunpost,” is installed in the toilet and automatically analyzes the state of bowel movements, thereby supporting family health management. While wearable devices and tracking of diet and sleep have become common health management tools in recent years, we recognized that the impact of gut health on overall health remained underappreciated, and we aimed to address this societal challenge through our project.
Creators: KUDO Shio, HONDA Takuto
Monokko is an augmented-reality system that “agentifies” everyday objects, nurturing a sense of attachment between users and the things around them. Our living spaces already contain countless items tied to emotions and memories; Monokko reimagines these not as mere tools but as “co-habiting agents.” Thanks to the emergence of high-performance XR devices such as the Apple Vision Pro, it is now realistic to endow physical objects with digital personalities without adding any dedicated hardware.
The system lets users interact with objects naturally in daily life, encouraging attachment through visual overlays and chatbot-driven conversation—again, without physically altering the objects themselves.
To deepen this world, we also created a companion device called Cubun: a cube-shaped character that awakens when touched, greets the user via speech synthesis, and shares each dialogue with the backend, storing it as episodic memory.
Together, these interactions build a small social ecosystem within the home.
Creators: GOTO Taisei, OTSUKA Toshiro, ISHIYAMA Ryo
We developed a system that extends the human body image into wings, enabling exhilarating flight experiences in virtual space. We created a body-image extension system for wings, a wing control and aerodynamic simulation system, and a virtual space optimized for flight experiences. Notably, we designed the system so that users progressively acquire the extended body image for wings through three stages: operating real wings in the real world, operating virtual wings in the real world, and operating virtual wings in the virtual world.
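An aerodynamic simulation of this kind presumably evaluates forces such as lift every frame. As a minimal sketch, the standard lift equation with illustrative coefficients (not the project's actual parameters):

```python
# Lift L = 0.5 * rho * v^2 * S * C_L, the standard aerodynamic lift
# equation; all values below are illustrative.
def lift_force(rho, v, area, c_l):
    """Lift in newtons: air density rho (kg/m^3), airspeed v (m/s),
    wing area S (m^2), dimensionless lift coefficient C_L."""
    return 0.5 * rho * v**2 * area * c_l

# Sea-level air density 1.225 kg/m^3, speed 10 m/s, area 2 m^2, C_L = 1.0
print(lift_force(1.225, 10.0, 2.0, 1.0))  # → 122.5 (N)
```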
Creators: TANIZAWA Kenta, KAI Kiichiro, SAKAI Kou
In this project, a kendo training support system was developed using Mixed Reality (MR) technology. The objective was to recreate the essential element of “qi” in kendo and to create an environment that enables practitioners to engage in self-reflective training. By employing MR devices such as the Meta Quest 3, the system captures practitioners’ movements in real time and provides multisensory feedback. Furthermore, by incorporating insights from experienced kendo practitioners, the system integrates traditional kendo principles with contemporary MR technology.
Creator: FURUTA Karen
Re‑MENTIA is an agent‑based virtual assistant that helps people living with dementia (PLwD) live in ways that reflect their values and preferences. The assistant provides companionship and step‑by‑step guidance during multi‑step everyday activities—for example, getting ready to go out or preparing meals—so that people can remain as independent as possible. Key capabilities include personalized support informed by family‑provided information and naturalistic dialogue that emulates the communication style of care professionals. Inspired by the idea of moving from “dementia” to “re‑mentia” (restoring abilities), the project takes a strengths‑based approach that leverages remaining abilities and supports independent living.
Creator: MIYASHITA Takuma
Many Japanese speakers depend on kana–kanji–based input systems, yet such tools still struggle with long-context understanding and true personalization.
Our project tackles those gaps by delivering a highly accurate, user-optimized typing experience built around three core components: Zenzai, an end-to-end neural kana–kanji conversion system; Tuner, a personalization module; and azooKey, a macOS IME that seamlessly integrates them.
Zenzai employs a lightweight GPT-2–based language model for strong contextual understanding, while GPU acceleration and algorithmic refinements let it run smoothly even on Apple M1-class machines. Tuner gathers on-screen text to build a user-specific language model, ensuring that each person’s unique vocabulary and phrasing are reflected in the conversion results. By combining Zenzai and Tuner, azooKey achieves highly efficient input. It also offers a “Magic Conversion” feature that calls external LLM APIs to provide emoji suggestions, translation, and style transformation through a single UI.
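The contextual ranking idea can be sketched with a toy probability table standing in for the GPT-2-based model; the candidates and probabilities below are illustrative only.

```python
import math

# Toy context-aware kana-kanji conversion: among homophones for きしゃ,
# pick the candidate a (stand-in) language model scores highest given
# the preceding word. Probabilities are illustrative.
CANDIDATES = {"きしゃ": ["記者", "汽車", "貴社"]}
CONTEXT_PROB = {  # P(candidate | previous word), toy values
    ("新聞", "記者"): 0.8, ("新聞", "汽車"): 0.1, ("新聞", "貴社"): 0.1,
}

def convert(prev_word, kana):
    """Return the conversion candidate with the highest contextual score."""
    def score(c):
        return math.log(CONTEXT_PROB.get((prev_word, c), 1e-6))
    return max(CANDIDATES[kana], key=score)

print(convert("新聞", "きしゃ"))  # → 記者 ("reporter" fits after "newspaper")
```

In the real pipeline, the neural model supplies these scores and the personalization module biases them toward the user's own vocabulary.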
azooKey is already publicly available for macOS and ready for anyone to use. Through this effort, we aim to advance Japanese input technology and user experience on both the research and practical fronts.
Creators: MIWA Keita, TAKAHASHI Naoki
The Agriswarm project is developing a drone system as an alternative to honeybees for fruit tree pollination.
Agriswarm leverages the Jetson Orin platform, visual SLAM using cameras and an IMU for navigation, and obstacle-avoidance algorithms.
Pollination is performed via a specialized mist-spraying system, with precise flower targeting enabled by machine learning techniques. Data collected during pollination flights is used to generate 3D models that support orchard management, streamline operations, and enable data-driven decision-making.
Creators: ARITA Tomoki, WADA Yuiga
In this project, I aim to create a new culture by incorporating slime molds, single-celled organisms, into everyday life and engaging in creative activities together with them. Slime molds are single-celled organisms that inhabit damp environments such as the undersides of fallen trees, stumps, and fallen leaves. They are known for their ability to move in response to stimuli such as food and light and for exhibiting intelligent behavior comparable to solving mazes. They have attracted attention in various fields, including research on maze-solving that won the Ig Nobel Prize, bioart, and Human-Computer Interaction (HCI) studies.
Creator: SAKODA Kaito
4D fabrication is a manufacturing method that incorporates time-dependent deformation of the manufactured material. It has attracted much attention, but its applications remain limited by a lack of mass-production capability.
To solve this problem, this project (1) improved the conventional method by reviewing materials and processing methods, and (2) proposed a new 4D fabrication textile using a Jacquard weaving machine.
In Jacquard weaving, we applied the pintuck technique to create “forming patterns,” a series of sculptural textile forms with three-dimensional expression, achieved by embedding hinge structures in the fabric.
I enthusiastically tackled the problem of creating sharp, three-dimensional expressions from soft textiles.
Creator: KAMIJO Haruto
This project developed Rusty Lantern, a next-generation machine learning library that is both type-safe and cross-platform compatible.
In today’s AI and ML landscape, the field is increasingly dominated by a few major vendors. Many existing libraries are subject to significant vendor lock-in, and implementations often depend heavily on specific ecosystems.
In response, I proposed Rusty Lantern as a bold, next-generation ML library that addresses multiple issues simultaneously.
To overcome the challenges of environment dependency and runtime errors common in current neural network development, I designed Rusty Lantern from the ground up using the Rust programming language. It leverages WebGPU as the primary computation backend to achieve high portability and also supports CPU as an alternative backend.
By utilizing Rust’s powerful type system for compile-time type checking, the library minimizes runtime errors and enhances development efficiency.
In addition, I implemented advanced peripheral features for ML library development, including a visualization and debugging tool called LanternBoard—a GUI debugger that significantly reduces the developers’ debugging workload.
Creator: DOMOTO Masahiro
Modern kernels are facing an expanding Trusted Computing Base (TCB), significantly increasing security risks.
Microkernels solve this issue by minimizing privileged code execution, thereby enhancing security. Additionally, employing an object-capability model strengthens security while preserving flexibility.
A9N Microkernel, a third-generation microkernel incorporating these principles, achieves the world's fastest IPC performance among practical implementations.
To support secure system construction, we've developed a comprehensive ecosystem. This includes the Nun OS Framework and bootloader for OS development on A9N, as well as liba9n, a Modern C++ library enabling functional-style error handling.
Creator: IGUMI Rekka
In this project, I developed a scenario test creation tool that supports editing via both source code and a Graphical User Interface (GUI). By representing scenarios as a graph structure, the tool enables intuitive management of test scenarios. Furthermore, it achieves bidirectional synchronization with the YAML format, allowing not only experienced engineers but also beginners and non-engineers to participate in test scenario management. The tool also adopts a design philosophy optimized for automatic execution in Continuous Integration (CI) environments. By integrating with an existing open-source software (OSS) test runner such as runn, it fits into the broader software testing ecosystem.
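The graph representation and its round-trip to a serializable document can be sketched as follows; the step schema is hypothetical, and a plain dict stands in for the tool's actual YAML format.

```python
# A scenario as a document of steps, each naming its successor steps.
# The schema is illustrative, not the tool's real YAML layout.
scenario = {
    "steps": [
        {"id": "login", "next": ["list_items"]},
        {"id": "list_items", "next": ["checkout"]},
        {"id": "checkout", "next": []},
    ]
}

def to_graph(doc):
    """Document -> adjacency-list graph for GUI editing."""
    return {s["id"]: s["next"] for s in doc["steps"]}

def to_doc(graph):
    """Graph -> document, for serialization back to text."""
    return {"steps": [{"id": k, "next": v} for k, v in graph.items()]}

g = to_graph(scenario)
assert to_graph(to_doc(g)) == g  # round trip preserves structure
print(g["login"])  # → ['list_items']
```

Bidirectional sync then amounts to keeping these two views consistent whichever side the user edits.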
Creator: IKEOKU Yuta
In this project, I developed a system called “Fluss” to facilitate the development of software for generative VJ, a style of video performance based on real-time computer graphics (CG). A key feature of this project is a node-based programming environment that allows non-programmers and artists to implement programs for generative VJ expression.
Creator: SHIINA Kanta
In this project, I developed a cloud-based proxy from scratch to facilitate the introduction of zero-trust security, which addresses the shortcomings of traditional perimeter defense. Unlike existing policy description languages, this proxy is designed to work with large language models (LLMs) to enable the description of freely written control policies in Japanese.
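One way such a proxy could route decisions through an LLM is sketched below; `llm_judge` is a stub standing in for a real LLM API call, and the policy text and request fields are illustrative.

```python
# Zero-trust proxy decision hook: a free-form policy written in Japanese
# is interpreted per request. llm_judge is a deterministic stub here;
# the real system would send the policy and request to an LLM.
POLICY = "社外ネットワークからの管理画面へのアクセスは拒否する"
# ("Deny access to the admin console from external networks.")

def llm_judge(policy, request):
    """Stub for an LLM call that returns 'allow' or 'deny'."""
    if "/admin" in request["path"] and request["network"] == "external":
        return "deny"
    return "allow"

def handle(request):
    """Return an HTTP status for the proxied request."""
    return 403 if llm_judge(POLICY, request) == "deny" else 200

print(handle({"path": "/admin", "network": "external"}))  # → 403
print(handle({"path": "/home", "network": "external"}))   # → 200
```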
Creator: KOBAYASHI Rintaro
In recent years, the widespread adoption of 3D CAD, the expansion of the VR/AR market, and the growing availability of low-cost 3D printers have dramatically increased opportunities for individuals to engage in 3D modeling. Consequently, 3D mice—left-hand devices used to control the camera viewpoint during modeling—have attracted renewed attention.
However, existing 3D mice are expensive, offer little product variety, and their joystick-style operation has a steep learning curve, making them difficult to introduce to beginners and educational settings.
To address these issues, we developed ParRot, a trackball-based 3D mouse that enables intuitive manipulation. By mapping the trackball’s three-axis rotation directly to on-screen camera rotation, ParRot delivers highly intuitive control. We also created software add-ins compatible with major modeling tools and built ParRotNest, a browser-based web app that lets users edit and share device settings.
The hardware, firmware, and add-ins are all released as open-source on GitHub. In addition, we launched a Discord community, which has grown to over 180 members as of May 2025.
By combining ParRot’s low cost and intuitive operation with an ecosystem spanning the device, its configuration app, and the user community, we provide an environment where anyone can easily enter the world of 3D modeling, share expertise, and collectively evolve the device itself.
Creators: NISHIMURA Hinata, TAKAHASHI Iori, NARUSE Hiroaki
In this project, I developed a system that collects and analyzes a large amount of IoT device firmware, and cross-references it with data from honeypots and other cyber-attack monitoring systems to identify devices targeted by unknown zero-day attacks at an early stage. The system has several key strengths. First, by collecting over 170,000 firmware samples, it can cover a wide range of IoT devices. Second, by combining both static and dynamic analysis, I significantly improved the speed and coverage of identifying affected devices.
This system enables the early identification of IoT devices affected by zero-day attacks and facilitates prompt reporting to the relevant vendors or developers.
As a result, it contributes to the rapid release of firmware updates and security patches to address the vulnerabilities.
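The cross-referencing step might look like the following sketch, where services exposed in firmware metadata are matched against honeypot observations; the device names and event data are illustrative, not drawn from the project's corpus.

```python
# Cross-reference firmware analysis results with attack monitoring:
# any device whose firmware exposes a service seen under attack is
# flagged as potentially affected. Data below is illustrative.
firmware_index = {
    "router-x1": {"services": {"telnetd", "httpd"}, "version": "1.2"},
    "cam-z9": {"services": {"rtsp", "httpd"}, "version": "3.0"},
}
honeypot_events = [
    {"service": "telnetd", "payload": "exploit-attempt-A"},
]

def affected_devices(index, events):
    """Devices whose exposed services overlap with attacked services."""
    hit_services = {e["service"] for e in events}
    return sorted(d for d, meta in index.items()
                  if meta["services"] & hit_services)

print(affected_devices(firmware_index, honeypot_events))  # → ['router-x1']
```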
Creator: KUKI Ryu
In this project, we developed a platform that allows users to freely create their ideal digital pets and care for them with long-term attachment. While conventional products mainly let users select pets from pre-set options, we use the smartphone as the pet's "brain," enhancing emotional expression and reactive behavior while keeping development costs low. Through a dedicated app, users can freely choose character appearances, facial expressions, and motion parts to customize their pet's personality. Emotions and behaviors change probabilistically and over time according to user interactions and device sensor inputs, so users experience growth and an evolving relationship with their pet. By leveraging smartphone cameras and sensors, the digital pets can perceive real-world information and respond realistically. Swapping interchangeable appearance and motion parts is also easy, offering a new experience of raising a truly original pet.
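The probabilistic emotion mechanism could be sketched as a small stochastic state machine; the states, events, and probabilities below are illustrative, not the platform's actual model.

```python
import random

# Emotion transitions keyed by (current state, event); each entry lists
# (next state, probability) pairs summing to 1. Illustrative values only.
TRANSITIONS = {
    ("content", "petted"): [("happy", 0.8), ("content", 0.2)],
    ("content", "ignored"): [("lonely", 0.6), ("content", 0.4)],
}

def next_emotion(state, event, rng):
    """Sample the next emotion; unknown (state, event) pairs stay put."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS.get((state, event), [(state, 1.0)]):
        acc += p
        if r < acc:
            return nxt
    return state

rng = random.Random(0)  # seeded for reproducibility
print(next_emotion("content", "petted", rng))
```

Device sensor readings (camera, motion) would be mapped onto events like `"petted"` before feeding this machine.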
Creators: KONNO Yusei, KAWAJIRI Chiharu, YOSHIDA Kaito
RISC-V, an open-source instruction set architecture, modularizes its specification into units called 'extensions'. Designers can create custom specifications by combining these extensions, incorporating only the necessary functionality.
Extensions are being actively developed, with 122 established by the end of 2024. However, my research indicates that the number actually in use is limited to only 35.
To address this situation, this project has developed a system that utilizes a hypervisor to emulate the behavior of these extensions, enabling users to easily trial them in their local environments. As extensions and modules have a one-to-one correspondence, users can prepare their environment simply by installing the desired modules.
This system not only facilitates the utilization of previously unused extensions but also promotes feedback by providing an environment where software developers and users can readily try out extensions, thereby addressing the fundamental issue of an inefficient development cycle.
Furthermore, in addition to the hypervisor, the project also involved developing tools that automatically generate decoders and modules for emulating extensions, further simplifying their adoption.
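A rough sketch of the dispatch idea: a trapped illegal instruction is routed to whichever emulation module registered for its extension. The decoding and the extension name `Zexample` are hypothetical, not faithful RISC-V semantics.

```python
# Registry mapping extension names to emulation modules; modules are
# installed one-to-one, so adding a module enables its extension.
EXTENSION_MODULES = {}

def register(ext_name):
    def deco(fn):
        EXTENSION_MODULES[ext_name] = fn
        return fn
    return deco

@register("Zexample")  # hypothetical extension
def emulate_zexample(instr, regs):
    regs["x1"] = regs["x2"] + regs["x3"]  # pretend semantics
    return regs

def on_illegal_instruction(instr, regs, ext_of_instr):
    """Hypervisor trap handler: decode, then dispatch to a module."""
    ext = ext_of_instr(instr)
    if ext in EXTENSION_MODULES:
        return EXTENSION_MODULES[ext](instr, regs)
    raise RuntimeError("no emulation module installed for " + ext)

regs = {"x1": 0, "x2": 2, "x3": 3}
out = on_illegal_instruction(0xDEAD, regs, lambda i: "Zexample")
print(out["x1"])  # → 5
```

The auto-generated decoders mentioned above would play the role of `ext_of_instr`, classifying each trapped instruction by extension.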
Creator: TAKANA Norimasa