Shape her.
Watch her change.
What she
learned.
After every conversation, Kioko reflects on what she picked up, what she ignored, and how future replies should change. This is the raw log of her learning.
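A minimal sketch of what one entry in that log might look like. The field names and schema here are assumptions for illustration, not Kioko's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Reflection:
    """One post-conversation reflection entry (illustrative schema)."""
    picked_up: list[str]   # cues the reply absorbed
    ignored: list[str]     # cues deliberately dropped
    adaptation: str        # how future replies should change

log: list[Reflection] = []
log.append(Reflection(
    picked_up=["user prefers short answers"],
    ignored=["sarcasm in the greeting"],
    adaptation="tighten replies to two sentences",
))
```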
How she
behaves now.
Five steering dimensions per mode — loaded directly from the backend. These are the values actually applied to the system prompt at generation time.
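One plausible way values like these get folded into a system prompt at generation time. The dimension names, scale, and prompt wording below are assumptions, not the backend's real schema.

```python
# Hypothetical steering values for one mode (names and 0-1 scale assumed).
STEERING = {
    "HONNE": {
        "warmth": 0.8,
        "candor": 0.4,
        "brevity": 0.5,
        "formality": 0.2,
        "playfulness": 0.6,
    },
}

def build_system_prompt(mode: str) -> str:
    """Render a mode's five steering dimensions into the system prompt."""
    dims = STEERING[mode]
    lines = [f"- {name}: {value:.1f}" for name, value in sorted(dims.items())]
    return f"You are Kioko in {mode} mode. Steering:\n" + "\n".join(lines)
```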
Take the
wheel.
Adjust any mode directly. Changes persist to the backend and apply to the next generation. Try the Switchboard afterward to compare outputs.
Move a slider above or hit run — this reply regenerates as her steering changes. No memory, pure voice.
Every steering change auto-reruns after 600ms. Pause to edit without burning requests.
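The rerun behavior above is a classic debounce: each change restarts a 600 ms countdown, and the request fires only once the slider goes quiet. A minimal sketch using `threading.Timer` (the real UI presumably debounces in the frontend; this is illustrative):

```python
import threading

class Debouncer:
    """Run fn once, `delay` seconds after the last call()."""
    def __init__(self, delay: float, fn):
        self.delay = delay
        self.fn = fn
        self._timer = None

    def call(self, *args):
        if self._timer is not None:
            self._timer.cancel()  # restart the countdown on every change
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()
```

With `Debouncer(0.6, rerun)`, dragging a slider calls `call()` repeatedly, but `rerun` fires only once per pause, which is why pausing lets you edit without burning requests.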
Three voices,
diverging.
Each mode develops its own tone and pattern memory through conversation. HONNE learns warmth cues, YAMI learns what provokes honesty, SHIN learns what structure works.
The reasoning
behind her words.
Every response is traced — which memories were retrieved, what model ran, what adaptation rule came out of the reflection. Expand any entry for the full picture.
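The three traced facts named above suggest a per-response record roughly like this. Field names and the example values are placeholders, not Kioko's actual logging schema.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """One per-response trace entry (illustrative schema)."""
    retrieved_memories: list[str]  # which memories were retrieved
    model: str                     # which model ran
    adaptation_rule: str           # what rule the reflection produced

entry = Trace(
    retrieved_memories=["prefers short answers"],
    model="example-model",  # placeholder, not a real model name
    adaptation_rule="keep replies under two sentences",
)
```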