Instantly compose and generate MIDI signals by specifying a genre, song characteristics, and other conditions.
Real-time, measure-by-measure control of tempo, instruments, number of notes, speed of development, and more.
The generated music can be shaped to achieve a desired effect based on real-time inputs such as body movements, biological data from sensors, or changes in the environment (see the mapping sketch below).
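A minimal sketch of what mapping a real-time biological signal to per-measure composition parameters could look like. The function name, thresholds, and parameter values are illustrative assumptions, not the product's actual interface.

```python
# Hypothetical mapping from a heart-rate reading to composition parameters
# applied at the next measure boundary. All values are illustrative.

def parameters_for_measure(heart_rate_bpm: float) -> dict:
    """Derive per-measure composition parameters from a heart-rate reading."""
    if heart_rate_bpm < 80:
        return {"tempo": 70, "note_density": "low", "development": "slow"}
    elif heart_rate_bpm < 120:
        return {"tempo": 100, "note_density": "medium", "development": "moderate"}
    else:
        return {"tempo": 130, "note_density": "high", "development": "fast"}

# Update parameters each measure as new sensor readings arrive.
for reading in [72.0, 95.0, 128.0]:
    print(parameters_for_measure(reading))
```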
With the goal of enhancing human functions such as concentration and exercise performance, we can provide dynamic music applications that adapt the generated content in real time to biological responses.
Music can be generated in hotels, offices, and other living environments to suit the situation at any given time. It is copyright-free and can be used without restrictions on media or location.
This product was used on the campaign website for Suntory's Tokucha tea, in a project that generates music matched to the user's diet.

AI music generation site based on dietary data: SUNTORY TOKUCHA MUSIC (https://tokuchamusic.jp/)
A deep learning model for generating MIDI signals, based on Transformers and recurrent neural networks, is trained on a variety of musical pieces. The architecture accepts not only changes in initial conditions but also changes in conditions during playback. The actual tone of the music is produced by a synthesizer, sampler, or other sound source. Research has shown the positive effects of music on health, but how music specifically affects particular biological responses is an area that requires research and development on a case-by-case basis.
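A minimal sketch, assuming PyTorch, of how a Transformer-style model could generate note tokens one measure at a time while accepting a new condition vector (tempo, instrument, density, development) before each measure. The model size, token vocabulary, and condition layout are illustrative assumptions, not the product's actual architecture.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 256   # assumed MIDI-event token vocabulary
COND_DIM = 4       # assumed condition vector: tempo, instrument, density, development
D_MODEL = 128

class ConditionalMidiGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.cond_proj = nn.Linear(COND_DIM, D_MODEL)  # inject per-measure conditions
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens, cond):
        # tokens: (batch, seq), cond: (batch, COND_DIM) for the current measure
        x = self.token_emb(tokens) + self.cond_proj(cond).unsqueeze(1)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(x, mask=mask)
        return self.head(h)  # next-token logits per position

def generate_measure(model, history, cond, tokens_per_measure=16):
    """Sample one measure of tokens; `cond` may differ from the previous measure."""
    for _ in range(tokens_per_measure):
        logits = model(history, cond)[:, -1]
        next_token = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        history = torch.cat([history, next_token], dim=1)
    return history

model = ConditionalMidiGenerator()
history = torch.zeros(1, 1, dtype=torch.long)  # start token
for measure_cond in [torch.tensor([[0.5, 0.1, 0.3, 0.2]]),   # calm measure
                     torch.tensor([[0.9, 0.1, 0.8, 0.7]])]:  # energetic measure
    history = generate_measure(model, history, measure_cond)
print(history.shape)  # accumulated token sequence across measures
```

The point of the sketch is that the condition vector is re-injected at every measure boundary, so parameters can change mid-piece without restarting generation.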


Licensing period: Monthly
Developer’s license: Yes
Input: Composition parameters (genre, tempo, instrument, number of notes, development, etc.)
Output: MIDI signal or WAV
Cloud computing: Standard API provided (see the usage sketch after this list)
On-premise environment: Available upon consultation
Real-time
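A hypothetical usage sketch of calling a cloud composition API with the parameters listed above. The endpoint URL, parameter names, and response handling are assumptions for illustration; the actual specification is provided with the standard API.

```python
import requests

# Illustrative composition request; parameter names are assumed.
params = {
    "genre": "ambient",
    "tempo": 90,               # BPM
    "instrument": "piano",
    "note_density": "medium",
    "development": "slow",
    "output_format": "midi",   # or "wav"
}

response = requests.post("https://api.example.com/v1/compose", json=params, timeout=30)
response.raise_for_status()

with open("generated.mid", "wb") as f:
    f.write(response.content)  # save the returned MIDI data
```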