the sync version seems more reliable:
class DiSTTSync(Skill):
    CHUNK = 1024
    FORMAT = pyaudio.paInt16
    CHANNELS = 1
    RATE = 16000
    MIN_ACTIVE_SECONDS = 0.5

    exit_event = Event()
    model = whisper.load_model("base")
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)
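Before whisper can transcribe the captured chunks, the raw 16-bit PCM bytes from the pyaudio stream have to become the normalized float32 array that whisper's `transcribe` accepts, and silent chunks are worth filtering out first. A minimal sketch of both steps; the helper names are mine, and the `0.01` RMS threshold is an arbitrary assumption, not a value from the original code:

```python
import numpy as np

def pcm16_to_float32(raw: bytes) -> np.ndarray:
    """Convert raw 16-bit PCM bytes (as read from the pyaudio stream)
    into a float32 array normalized to [-1, 1], the format whisper expects."""
    samples = np.frombuffer(raw, dtype=np.int16)
    return samples.astype(np.float32) / 32768.0

def is_speech(audio: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy gate: treat the chunk as speech if its RMS level
    exceeds the threshold. 0.01 is an assumed starting point; tune by mic."""
    rms = float(np.sqrt(np.mean(audio ** 2)))
    return rms > threshold
```

A chunk that passes `is_speech` would then be appended to the recording buffer; the concatenated buffer goes straight into `model.transcribe(...)`.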
the one-piece version:
import whisper
import pyaudio
import numpy as np
import re
import atexit
from threading import Event
from LivinGrimoire23 import Brain
from async_skills import ShorniSplash

# before running the code:
"""
cmd:
winget install ffmpeg
check it installed ok:
ffmpeg -version
in a python terminal:
pip install...
"""
oh yeah, fully automatic. once a skill is added, it just works based on input.
many skills are automatic, some are triggered.
she can talk on her own volition for example.
no, deepseek only replies to my input.
my STT is shit, I don't know why, ngl.
my AI works like the Matrix learn scene: 1 line of code to add a skill.
deepseek is just 1 possible skill.
for example:
brain.add_logical_skill(DiVoices())
adds a skill that lets me change the output voice
or...
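The Matrix-style "one line to add a skill" pattern can be sketched with a toy brain. Only `add_logical_skill` is taken from the example above; the `Skill`/`Brain` bodies and the `DiGreeter` skill here are simplified stand-ins, not the actual LivinGrimoire API:

```python
# Hypothetical sketch of the one-line skill-plugging pattern.
# Class and method bodies are illustrative, not the real LivinGrimoire23 code.
class Skill:
    def input(self, ear: str) -> str:
        """Return a reply for this input, or "" to stay silent."""
        return ""

class Brain:
    def __init__(self):
        self._skills = []

    def add_logical_skill(self, skill: Skill):
        # the "1 line of code" entry point: registering makes the skill live
        self._skills.append(skill)

    def think(self, ear: str) -> str:
        # automatic dispatch: first skill that produces output wins
        for skill in self._skills:
            reply = skill.input(ear)
            if reply:
                return reply
        return ""

class DiGreeter(Skill):
    """Hypothetical example skill: reacts to greetings on its own."""
    def input(self, ear: str) -> str:
        return "hello!" if "hello" in ear.lower() else ""

brain = Brain()
brain.add_logical_skill(DiGreeter())  # one line, skill is now active
print(brain.think("hello there"))     # -> hello!
```

Under this sketch, DeepSeek, the STT skill, and DiVoices would each just be another `Skill` subclass handed to `add_logical_skill`.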