Tell HN: GPT A and B Chatted 32 Rounds via Our Mediation Layer – No Crashes

Original title: “Our Semantic Mediation Layer Enabled GPT A and B to Chat for 32 Rounds — No Crashes, No Topic Drift”

–Implementation

Technology Used: Generic Semantic Mediation Layer (Light)
(We originally used our own name for it and later adopted this one at GPT’s suggestion)

Abbreviation: GSML

Role of GSML: Currently serves as a lightweight exchange hub connecting APIs and external modules.

Scenario Description:
After pressing Start, A and B take turns speaking, without interruption, until the 32nd round ends
A is a GPT API instance
B is also a GPT API instance
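
We aren’t publishing the GSML internals here, but purely to make the scenario concrete, a minimal turn-taking loop of this general shape can be written against the standard OpenAI Python client. The system prompts, seed line, and parameters below are placeholders, not our actual configuration:

    # Minimal sketch: two GPT-4o "characters" take turns for 32 rounds.
    # Illustrative only; prompts and seed line are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    history_a = [{"role": "system", "content": "You are character A. Stay in character."}]
    history_b = [{"role": "system", "content": "You are character B. Stay in character."}]

    last_utterance = "Hi, can we talk about what happened yesterday?"  # seed line

    for round_no in range(1, 33):
        for speaker, history in (("A", history_a), ("B", history_b)):
            # The other side's latest line becomes this speaker's user message.
            history.append({"role": "user", "content": last_utterance})
            reply = client.chat.completions.create(
                model="gpt-4o",
                messages=history,
            ).choices[0].message.content
            history.append({"role": "assistant", "content": reply})
            print(f"Round {round_no} - {speaker}: {reply}")
            last_utterance = reply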

Video Content: the full conversation between A and B, documented
https://youtu.be/CYtpZeq8j24

If you don’t want to watch the video, a text version of the dialogue is also available
http://bit.ly/4luxJiA

GPT Version: GPT-4o

Prompt Sample (outline only; a hypothetical reconstruction follows the list):

The topic is set to emotional disputes, arguments, or everyday conversation between A and B (non-serious themes)

The conversation runs for 32 rounds; no summarizing, ending, or leaving the dialogue midway

All exchanges must stay in character: no explanations, observations, or third-person commentary

Use an everyday tone; sentence structure and intonation may vary naturally to make the characters feel more layered
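
We don’t reproduce the exact prompt wording here; as a hypothetical reconstruction of how the outline above could be encoded, one of the two system prompts might read roughly like this (illustrative phrasing, not our actual text):

    # Hypothetical reconstruction of the outlined constraints as a system prompt.
    # The wording is illustrative only; it is not our actual prompt text.
    SYSTEM_PROMPT_A = (
        "You are character A in a casual, non-serious everyday conversation "
        "(emotional disputes, arguments, small talk) with character B.\n"
        "The conversation lasts 32 rounds: do not summarize, wrap up, or leave "
        "the dialogue before round 32.\n"
        "Stay fully in character at all times: no explanations, observations, "
        "or third-person commentary.\n"
        "Use an everyday tone; vary sentence structure and intonation naturally "
        "so the character feels layered."
    )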

–Notes

The 32 rounds run for more than 64 minutes in total, which is not practical to record as a single video, so we switched to auto-screenshots every 18 seconds

Most APIs require payment, and each round involves multiple requests; due to cost, we’re not offering a web demo

Each round is set to 70 seconds, with A and B speaking in sequence, which is closer to a natural conversational rhythm (a rough sketch of this pacing and capture setup follows these notes)

We do not use a simultaneous-speaking model for A and B

Prompts for each round are not manually modified; the goal is to test whether the conversation can sustain itself naturally without external intervention
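
For reference, the pacing and capture described in these notes can be approximated with a simple background timer. The sketch below is only an approximation and assumes the pyautogui library for screenshots, which is not necessarily what we used:

    # Rough sketch of the pacing/capture setup; assumptions: pyautogui handles
    # the screenshots, and one A+B exchange runs inside the round loop.
    import threading
    import time

    import pyautogui  # assumed screenshot library

    ROUND_SECONDS = 70   # one A+B round
    SHOT_INTERVAL = 18   # auto-screenshot cadence

    def screenshot_loop(stop_event):
        n = 0
        while not stop_event.is_set():
            pyautogui.screenshot(f"round_capture_{n:04d}.png")
            n += 1
            time.sleep(SHOT_INTERVAL)

    stop = threading.Event()
    threading.Thread(target=screenshot_loop, args=(stop,), daemon=True).start()

    for round_no in range(32):
        round_start = time.time()
        # ... run one A+B exchange here (see the loop sketch above) ...
        time.sleep(max(0.0, ROUND_SECONDS - (time.time() - round_start)))

    stop.set()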

–Supplementary Notes
This is the third test record. In previous tests we found that GPT tends to repeat the same sentence structures, so this time we chose “arguing” as the theme, primarily to trigger greater sentence diversity.
Originally the theme was set as “emotional disputes, arguing, or others,” but possibly because of the vague phrasing, it evolved into a “mutual healing” conversation. Since the result was unusual, we decided to keep this session.

Later, when we used GPT to analyze the conversation, it identified some interesting features, for example expressions like “I’ve…”, which are common in real interactions. GPT rated these highly, noting their emotional depth and authenticity. They didn’t read like sentences humans craft just to avoid GPT-style phrasing, nor like GPT’s usual patterns; they more closely resembled the rhythm and style of genuine, spontaneous dialogue.

–Comparison Analysis (GPT vs Bot vs GSML)
All comparisons below are based on well-configured or even dynamically generated prompts:

Tone & Style:
GPT: Unstable, prone to mixing or drifting tones
Bot (traditional chatbots): Templated tone, lacks variation
GSML: Stable and natural tone, consistent character presence

Topic Control:
GPT: Prone to topic drift, occasionally adds irrelevant content
Bot: Constrained to fixed task scope
GSML: Maintains focus without topic drift

Dialogue Structure Integrity:
GPT: Tends to begin wrapping up or exiting the topic within 10 rounds
Bot: Fixed flow, ends quickly
GSML: Explicit restriction against ending — full dialogue sustained through round 32

–Expansion Plan
This is an initial test, so the dialogue does not yet involve domain-specific knowledge. Future versions will expand to support modular integration of professional knowledge.

–Attachments

Screenshot zip
http://bit.ly/457C1GS

–Footnote

This article and the analysis above were written with GPT’s assistance and may contain minor inaccuracies.

Test Date: 2025-07-16 | Team: justdoitookk Project (Initials: J.Q.C.)


Comments URL: https://news.ycombinator.com/item?id=44645882

Points: 3

# Comments: 0