I enjoyed the fun of prancing on a giant tap-dance piano pad and playing Dance Dance Revolution, where you control music with your feet. Imagine creating various sounds with your body motion while seeing colors and patterns projected on you and your surroundings.
I'll show how you can create an interactive visual musical canvas, using a Walabot for body-movement tracking and an Amazon Echo as the voice interface for controlling settings. A Windows PC outputs to a giant display or overhead projector to create sound, color, and shapes that respond to the player's position. It's like having a variable-size track pad, configurable from the size of a book to a large living room.
- Sounds a random arpeggio of a musical scale as the player moves.
- Animated particle effects of varying size and color emanate from the tracked player position.
- Use voice to configure sound and visual settings - change colors, instruments, and musical note scale combinations.
- Assign variable-sized zone areas with a voice command to save a configuration (shortcut) - the player can later recall the settings by re-entering the trigger area.
- Control playback of MIDI sound files and drum loops with voice.
Using voice commands eases the calibration of mapping the movement area to the projected screen boundary. It also simplifies selecting chord names without multiple buttons or UI menu navigation. For example:
Alexa, tell V Synth:
- change sound to string
- change sound number to fifty
- change color to green
- switch music mode
- play track number three
- stop song
Configure Mode to Trigger Zone
Alexa, tell V Synth:
- mark zone
- mark position
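On the hub side, each recognized voice command ultimately becomes a small settings message broadcast to the Unity client. Here's a minimal sketch of that translation - the command vocabulary matches the examples above, but the message fields are my own illustrative assumptions, not the project's exact protocol:

```python
# Map spoken commands (as recognized by the Alexa skill) to hub messages.
# Message fields ("cmd", "value") are illustrative assumptions.
COMMANDS = {
    "change sound": "instrument",
    "change color": "color",
    "play track": "play",
    "mark zone": "mark_zone",
    "stop song": "stop",
}

def dispatch(utterance):
    """Turn a phrase like 'change color to green' into a settings message."""
    for prefix, cmd in COMMANDS.items():
        if utterance.startswith(prefix):
            arg = utterance[len(prefix):].strip()
            if arg.startswith("to "):  # drop the connective word
                arg = arg[3:]
            return {"cmd": cmd, "value": arg}
    return {"cmd": "unknown", "value": utterance}
```

In the real system, Alexa's interaction model handles the phrase recognition; this only models how a matched intent maps to a setting change.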
As the player moves around the area, an arpeggio chord with ascending and descending tones, based on the scale mode, will sound, along with projected cyan and deep-blue circles and lines.
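For example, mapping the player's normalized horizontal position to an ascending note of the current scale could look like this - the scale table, root note, and octave range here are my own illustrative choices, not the project's exact values:

```python
# Semitone offsets of a major scale; the project switches scale modes
# by voice command, so this table is just one example.
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def note_for(x, root=60, scale=MAJOR, octaves=2):
    """Map a normalized position x in [0, 1] to an ascending MIDI note:
    moving across the pad walks up the scale degrees."""
    steps = len(scale) * octaves
    i = min(int(x * steps), steps - 1)
    octave, degree = divmod(i, len(scale))
    return root + 12 * octave + scale[degree]
```

Sweeping x from 0 to 1 produces the ascending run; playing the same mapping in reverse gives the descending half of the arpeggio.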
Possible Interactive Applications
- One-man bands: assign zones to trigger different effects seamlessly while performing, by moving into these zones instead of stepping on switches.
- Body motion controlling stage lights.
- Easy mapping of the movement area to the projected screen boundary. The player says, "Alexa, tell Visual Synth start calibration", then just starts walking around the projected area.
- Distributed processing: multiple devices receive the same player motion data over local WiFi, each handling a specific function. For example, a Raspberry Pi that turns on lights or animatronics while another PC renders the data at a different angle for projection, or a mobile VR application.
- Portable positional sensor: use a 5V battery pack to power the Walabot connected to a Raspberry Pi as a hub, broadcasting the Walabot-detected motion coordinates over local WiFi to other fixed-location devices.
The Walabot is a radar imaging sensor that can track movement. It connects to a PC via a USB port. I installed the SDK and experimented with setting the resolution and range using the sample program. The Python API runs motion tracking to get the x, y coordinates of the player in a rectangular movement area. Position the Walabot horizontally with the antenna array facing the player.
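The tracking loop polls the sensor (via the WalabotAPI's Trigger and GetSensorTargets calls) and yields target positions in centimeters; before sending them on, they can be normalized into pad coordinates. A sketch of that mapping - the arena bounds shown are assumed calibration values, not the SDK's defaults:

```python
def to_pad_coords(x_cm, y_cm, arena=(-100.0, 100.0, 30.0, 200.0)):
    """Map a Walabot target position (in cm) into normalized 0..1 pad
    coordinates. `arena` = (x_min, x_max, y_min, y_max) are assumed
    calibration bounds for the rectangular movement area."""
    x_min, x_max, y_min, y_max = arena
    u = (x_cm - x_min) / (x_max - x_min)
    v = (y_cm - y_min) / (y_max - y_min)
    clamp = lambda t: max(0.0, min(1.0, t))
    # Clamp so a target just outside the arena still maps to the edge.
    return clamp(u), clamp(v)
```

Normalizing at the sensor end means every downstream client (Unity, a Raspberry Pi, a VR app) can interpret the coordinates without knowing the physical room size.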
A Windows PC installed with the following Python libraries implements a web server running a WebSocket messaging system that allows the different applications to communicate. Check out this WebSocket with Flask blog for more detail on the inner workings of these libraries.
- Flask-Ask - Allows easy scripting of Alexa responses with templates.
- Flask-SocketIO - For bi-directional communication between web clients and servers. A hub Python script runs as the server, broadcasting messages to multiple connected clients.
- Gevent - For efficient event-loop communication.
- Ngrok - Allows secure tunnels from a public endpoint, such as the internet, to a locally running network service. This is needed for the Alexa Skill's voice commands to reach the web server running locally on the Windows PC.
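Stripped of the networking, the hub's job is a simple fan-out: every message from the Walabot tracker or the Alexa skill is rebroadcast to all connected clients (in the actual script, Flask-SocketIO's broadcast emit plays this role). A toy in-memory model of the pattern:

```python
class Hub:
    """In-memory stand-in for the SynthHub broadcast pattern."""
    def __init__(self):
        self.clients = []

    def register(self, callback):
        # In the real system, each connected Socket IO client plays this role.
        self.clients.append(callback)

    def broadcast(self, event, data):
        # Every event (position update, voice command) goes to every client.
        for callback in self.clients:
            callback(event, data)
```

Registering the Unity renderer and, say, a Raspberry Pi light controller as separate clients is exactly what enables the distributed-processing ideas above.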
Unity 3D is the game engine used to power the animated graphics and sound in this project. I used the following library and plugins.
Audio Synth Library
I used the CSharp Synth For Unity library to create various sounds. You can program notes based on the MIDI protocol, load SoundFonts for a variety of instruments, and play MIDI files. I implemented the following:
- A piano keyboard emulator that triggers different notes by pressing the middle rows of the computer keyboard.
- Keys to switch between instrument patches.
- A track pad that plays note scales based on X, Y movement.
- Playback of several preset instrumental accompaniment tracks, generated using "Band-in-a-Box" computer-generated accompaniment.
Socket IO to receive movement coordinates
Raw coordinates are passed through the hub and relayed to Unity 3D to control the cursor. Depending on the software implementation, this allows multiple clients to use and adapt the Walabot's raw data for their own specific applications.
Visual feedback in Unity 3D
Using the direction coordinates from the Walabot, I created particle emitters in Unity 3D and added the following functions:
- A script moves the emitter based on the player's position coordinates.
- If it receives a change-instrument command, it switches to a random sound.
- Change the color of the particles when receiving a style-change command.
- Plot a circular region shape when receiving a mark-region command.
- Remove the marked region with the delete-zone command.
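The zone-recall behavior described earlier boils down to a point-in-circle test over the saved regions (shown here in Python for illustration - the Unity scripts are C#, and the field names are my own):

```python
import math

def zone_hit(px, py, zones):
    """Return the saved settings of the first circular zone containing
    the player position (px, py), or None if the player is outside all
    zones. Each zone is (cx, cy, radius, settings)."""
    for cx, cy, radius, settings in zones:
        if math.hypot(px - cx, py - cy) <= radius:
            return settings
    return None
```

Running this check on every position update is what lets a player recall a saved configuration simply by stepping back into a marked area.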
Here are the configuration and steps I used for my setup in a Windows 10 environment. A packaged DEMO file in a zip folder is available for tryout at the end of the page.
Unity 3D Setup
- Download the latest version of Unity from unity3d.com
Audio Synth Library
- Download or clone the CSharp Synth For Unity library from GitHub
- Open the CSharpSynth project in Unity 3D
- Download and unzip the Visual Synth Unity project from GitHub
- Move VisualSynth into "CSharpSynth/Assets"
- Move "MIDI" folder into "CSharpSynth/Resources/MIDI"
- Open Scene file "Demo" in "CSharpSynth/Assets/VisualSynth"
Socket IO Setup
- Download the Socket IO (free) plugin for Unity from the Unity Asset Store
- Modify the Url field in the Socket IO component to match the port number (5000)
Install Python SDK
- Download and install the Walabot SDK
- Download the WalabotAPI Python library
- Download and install Python 2.7 - follow this tutorial to install pip (the Python package manager) on Windows
- Add the Python executable to the PATH environment variable.
Install dependent Python Modules
- Open Windows Powershell
pip install flask-socketio
pip install flask-ask
pip install gevent
- Download the SynthHub.py, templates.yaml, and WalabotService.py Python files in the attachment section and place them under a folder of your choosing - "Visual Synth"
- Unzip the ngrok.exe file
- Place the ngrok.exe in a folder
- Sign in or create a new developer account at https://developer.amazon.com/edw/home.html
- Select Alexa Skills Kit and Get Started
- Select "Add a New Skill"
- Under Skill Information, select Custom Interaction Model
- Go to Configuration
- Select HTTPS for the Endpoint Type
- Paste in the HTTPS address from the ngrok output
- For the SSL certificate, choose "My development endpoint is a sub-domain of a domain"
Unzip and click VisualSynth.exe to start the GUI.
You can play with the sounds using the keyboard shortcuts. Next, start up the WebSocket IO hub to receive controls from the Walabot and Amazon Echo.
Start Up Instructions
Make sure you have set up the Python library environment and the Amazon Voice Skill.
- Open a command line and start ngrok with the port number (5000)
ngrok http 5000
- Sign into the Alexa Developer Console configuration page.
- Copy the highlighted forwarding address from the ngrok output and paste it into the default Endpoint address.
- Start up the SynthHub Python script. In a command console, type
python SynthHub.py
The hub will start up, connect to the Walabot, and wait for clients to connect. You can then start giving voice commands to Alexa.
Alexa Skill ID: amzn1.ask.skill.8202d8f1-5f9d-43a8-bd37-4c346506fecc