Talk with an Artificial Intelligence in your browser. This demo uses:
- OpenAI's Whisper to listen to you as you speak into the microphone
- OpenAI's GPT-2 to generate text responses
- Web Speech API to vocalize the responses through your speakers
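The last of these steps can be done directly from the page's JavaScript. Below is a minimal sketch (not the demo's exact code) that vocalizes a generated reply with the Web Speech API; speakReply is an illustrative helper name.

```ts
// Minimal sketch: vocalize a GPT-2 reply with the browser's Web Speech API.
// This is an assumption about how the output step could be wired up,
// not the demo's actual implementation.
function speakReply(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = "en-US"; // assumed: the demo uses English-only models
  utterance.rate = 1.0;     // default speaking rate
  window.speechSynthesis.speak(utterance);
}

// Example usage once GPT-2 has produced a response:
speakReply("Hello! How can I help you today?");
```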
You can find more about this project on GitHub.
More examples: main | bench | stream | command | talk
Select the models you would like to use and click the "Start" button to begin the conversation (see the audio-capture sketch below the debug output).
Whisper model:
Quantized models:
GPT-2 model:
Status: not started
[The text context will be displayed here]
Debug output:
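When you press "Start", the page needs microphone access before Whisper can hear anything. Below is a minimal, hypothetical sketch of that capture step, assuming the 16 kHz mono float samples that whisper.cpp expects; startListening and onAudioChunk are illustrative names, not the demo's actual API.

```ts
// Hypothetical sketch: capture microphone audio and hand chunks of
// Float32 samples to a transcription callback. whisper.cpp expects
// 16 kHz mono input, so we request that sample rate from the browser.
async function startListening(
  onAudioChunk: (samples: Float32Array) => void
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  // Ask for 16 kHz; if the browser does not honor it, resampling would be needed.
  const ctx = new AudioContext({ sampleRate: 16000 });
  const source = ctx.createMediaStreamSource(stream);

  // ScriptProcessorNode is deprecated but widely supported;
  // an AudioWorklet would be the modern alternative.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    onAudioChunk(new Float32Array(e.inputBuffer.getChannelData(0)));
  };

  source.connect(processor);
  processor.connect(ctx.destination);
}
```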
Troubleshooting
The page does some heavy computations, so make sure:
- To use a modern web browser (e.g. Chrome, Firefox)
- To use a fast desktop or laptop computer (i.e. not a mobile phone)
- That your browser supports WASM fixed-width SIMD (a feature probe is sketched below)
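One way to verify the SIMD requirement is to validate a tiny WebAssembly module that contains a v128 instruction. The sketch below follows the widely used wasm-feature-detect probe; the byte sequence is reproduced here as an assumption, not something this page necessarily ships.

```ts
// Probe for WASM fixed-width SIMD support by validating a tiny module
// that uses v128 instructions (byte sequence adapted from wasm-feature-detect).
function hasWasmSimd(): boolean {
  return WebAssembly.validate(new Uint8Array([
    0, 97, 115, 109, 1, 0, 0, 0, // "\0asm" magic + version 1
    1, 5, 1, 96, 0, 1, 123,      // type section: () -> v128
    3, 2, 1, 0,                  // function section: one function of type 0
    10, 10, 1, 8, 0,             // code section: one body, no locals
    65, 0,                       // i32.const 0
    253, 15,                     // i8x16.splat
    253, 98,                     // i8x16.popcnt (a SIMD-only opcode)
    11,                          // end
  ]));
}

if (!hasWasmSimd()) {
  console.warn("This browser does not support WASM fixed-width SIMD.");
}
```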
Note that these models were not designed to run in the browser, so the performance and quality of the results may not be optimal. If you have any questions or suggestions, check out the following discussion.
Here is a short video of the demo in action: https://youtu.be/LeWKl8t1-Hc
Build time: | Commit hash: @GIT_SHA1@ | Commit subject: | Source Code