AI Just Got Seriously Disturbing 🤖🤯 Future Shock?

May 12, 2026 |

AI


🧠Quick Intel


  • Thinking Machines Lab (TML) was founded last year by Mira Murati, formerly of OpenAI.
  • The company is developing “interaction models” aiming for AI that can interrupt users, shifting from the traditional “you talk, it listens” model.
  • TML-Interaction-Small responds in 0.40 seconds, matching natural human conversation speed and outperforming OpenAI and Google models.
  • The company’s approach is described as “full duplex,” processing input and generating responses simultaneously.
  • A “limited research preview” of the interaction models is planned for release in the next few months.
  • A wider release of the interaction models is scheduled for later this year.
📝Summary


Thinking Machines Lab, established last year by Mira Murati, revealed its “interaction models” on Monday. The company is exploring AI capable of interrupting users, moving beyond the traditional input-then-output approach. It has developed a “full duplex” model, TML-Interaction-Small, which responds in 0.40 seconds – mirroring natural human conversation speeds and surpassing OpenAI and Google’s models. This research preview indicates a shift toward simultaneous processing and response generation. A limited research preview is slated for the next few months, with a broader release anticipated later this year.

💡Insights



TML-INTERACTION: A Paradigm Shift in AI Interaction
Thinking Machines Lab, a recently established AI startup led by former OpenAI CTO Mira Murati, has unveiled its approach to artificial intelligence – interaction models. This technology challenges the conventional pattern of AI interaction, in which users engage in a sequential exchange of prompts and responses. Instead, Thinking Machines is developing a model that processes user input and generates a response simultaneously, mirroring a genuine, real-time conversation. This “full duplex” system is a significant departure from existing AI models, aiming for a fluid, intuitive experience closer to a natural telephone call than a traditional text-based exchange. The company’s initial focus is the TML-Interaction-Small model, which responds in just 0.40 seconds – a pace comparable to human conversation and considerably faster than current offerings from OpenAI and Google.
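The key idea behind “full duplex” is that listening and responding run concurrently instead of alternating turns, so fresh user input can redirect a reply already in progress. TML has not published an API or any implementation details, so the sketch below is purely illustrative – every name in it is invented – but it shows the concurrency pattern using Python’s asyncio: one task keeps consuming input while another generates output and reacts to interruptions mid-stream.

```python
import asyncio

async def listen(incoming: asyncio.Queue, transcript: list) -> None:
    """Keep consuming user input chunks -- even while a reply is being generated."""
    while True:
        chunk = await incoming.get()
        if chunk is None:  # session closed
            break
        transcript.append(chunk)

async def speak(transcript: list, outgoing: list) -> None:
    """Stand-in for token generation; checks for new input between tokens."""
    seen = 0
    for _ in range(5):
        if len(transcript) > seen:  # user spoke mid-generation: react to it
            seen = len(transcript)
            outgoing.append(f"(reacting to: {transcript[-1]})")
        else:
            outgoing.append("token")
        await asyncio.sleep(0.01)  # simulated per-token latency

async def session() -> list:
    incoming: asyncio.Ueue = asyncio.Queue()
    transcript: list = []
    outgoing: list = []
    listener = asyncio.create_task(listen(incoming, transcript))
    speaker = asyncio.create_task(speak(transcript, outgoing))
    await incoming.put("hello")  # user interjects while the model is responding
    await speaker
    await incoming.put(None)     # shut down the listening loop
    await listener
    return outgoing

if __name__ == "__main__":
    print(asyncio.run(session()))
```

In a half-duplex system the “hello” would have to wait for the full reply; here the generation loop notices it between tokens and adjusts, which is the behavioral difference the article describes.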

Technical Specifications and Initial Performance
The TML-Interaction-Small model’s speed stems from an architecture designed for simultaneous input processing and response generation. Crucially, this is a research preview, not a commercially available product. Thinking Machines intends to first release a “limited research preview” in the coming months, offering access to a select group of researchers, with a broader public release planned for later this year. The reported response time of 0.40 seconds positions TML-Interaction-Small competitively against industry leaders like OpenAI and Google, suggesting a potentially transformative impact on the speed and efficiency of AI interactions. Further testing and refinement will undoubtedly occur as the model moves toward wider availability.
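A response-time figure like 0.40 seconds typically measures the gap between the end of user input and the start of the model’s reply. TML’s actual benchmark methodology has not been published, so the snippet below is only a generic illustration of how such a latency number can be taken; `fake_model` is a hypothetical stand-in for a real model endpoint.

```python
import time

def measure_response_latency(get_reply, prompt: str) -> float:
    """Seconds from the end of user input to the reply being available."""
    start = time.perf_counter()
    get_reply(prompt)  # hypothetical model call
    return time.perf_counter() - start

def fake_model(prompt: str) -> str:
    """Stub endpoint: simulates inference delay, then echoes the prompt."""
    time.sleep(0.05)
    return f"echo: {prompt}"

latency = measure_response_latency(fake_model, "hello")
print(f"{latency:.2f}s")
```

For streaming conversational systems, the more telling variant is time-to-first-token rather than time-to-complete-reply, since the listener perceives the pause only until speech begins.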

Future Development and Release Strategy
Thinking Machines’ strategy centers on iterative development and controlled release. The initial limited research preview will provide feedback and allow the team to further optimize the TML-Interaction-Small model. The company’s stated goal is to evolve the model based on real-world usage and research findings, ultimately leading to a fully functional product. While the precise details of the wider release are still under wraps, the planned timeline indicates a commitment to delivering a robust AI interaction experience to the public within the latter half of the year. The focus remains on refining the technology and ensuring a smooth transition from research preview to a commercially viable product.