INMA creates AI answer engine, draws on association’s extensive and proprietary content
INMA News Blog | 16 June 2025
INMA today launched an AI-driven answer engine that draws on the organisation’s extensive and proprietary content library, becoming the first press association to create such a product.
Click here to view “Ask INMA” answer engine now
“Ask INMA” is the world’s only answer engine with access to the association’s unparalleled archive. At launch, the multi-lingual AI draws on a decade of:
- 15,384 HTML documents
- 7,460 best practices from the Global Media Awards
- 5,712 blog posts from media professionals
- 2,337 presentations from INMA’s conferences, study tours, and master classes
- 356 video files from INMA’s Webinars
- 153 audio files from INMA conferences and master classes
- 72 reports, originally written by INMA authors
Nearly 90% of INMA content is walled off from public AI and search engines, making its surfacing and contextualisation something the news industry has never seen before.

“AI solves a problem of INMA content discovery and insights by cutting through media formats such as HTML, PowerPoints, PDFs, video, and audio,” said Earl J. Wilkinson, executive director and CEO of INMA. “The new answer engine makes media professionals smarter, faster, and provides INMA members a leg up on ideas, insights, and best practices.”
“Ask INMA” is available exclusively to INMA members: you must be logged in as a current member to use the AI.
Why “Ask INMA” stands out
By combining AI-driven precision with deep news industry expertise, Ask INMA stands out in five ways:
Industry-specific intelligence: Unlike general-purpose AI tools, the INMA answer engine is fine-tuned to the language, challenges, and priorities of news media professionals. It doesn’t just return generic answers — it understands the nuanced context of journalism, advertising, subscriptions, product, and newsroom transformation.
Curated global knowledge base: The answer engine taps into INMA’s exclusive archive of case studies, best practices, research reports, and strategic insights. This means answers are not only AI-generated but rooted in vetted, member-contributed, and expert-led content.
Action-oriented answers, not just information: Responses go beyond summaries — they translate insights into strategy, helping users make decisions, generate ideas, and solve business problems in real time.
Built for busy professionals: The interface is designed to deliver quick, punchy, and relevant answers — ideal for CEOs, editors, marketers, and product leaders who need clarity and speed, not just data dumps.
A living resource: “Ask INMA” is not a static knowledge base but a continuously updated one as INMA evolves, learns from events, and gathers insights from members and thought leaders, making it the first truly dynamic brain trust for news publishers.
“This new answer engine is like having INMA’s global brain on call 24/7 — smarter than a search bar, more contextual than generic AIs, and built for the business of news,” Wilkinson said.
Launched in beta mode
The INMA answer engine is launching in beta mode, but it has already been tested by the association’s Board members, initiative leads, and senior staff. Board members encouraged INMA to launch after finding the answers invaluable.
“It’s not perfect, yet we know INMA members will help improve ‘Ask INMA’ in the weeks and months ahead,” Wilkinson said.

Built on an OpenAI large language model, “Ask INMA” searches take longer than those of public answer engines because the system does not rely on pre-trained knowledge alone. A typical “Ask INMA” inquiry takes 30-60 seconds to answer because the engine must “read” and process more information on the spot; it is effectively in “deep research mode” all the time.
“While this takes more time, the answers are more accurate and relevant,” Wilkinson said. “In short, the wait is usually worth it.”
About the “Ask INMA” answer engine
The engine behind “Ask INMA” was developed by Techlabee.ai as a modular AI system composed of 12 specialised micro-engines. Each module handles a specific task, including transcription, data extraction, contextual analysis, and knowledge synthesis, enabling the system to work with varied formats such as PDFs, long-form audio and video, and unstructured web content.
Rather than relying on a single large model, the engine operates as an interoperable stack that integrates with internal data systems. It uses semantic vector search, intelligent tagging, prompt orchestration, and validation layers to ensure that answers are accurate, traceable, and grounded in source material.
At its core, the engine includes a custom Retrieval-Augmented Generation (RAG) system that retrieves relevant fragments from proprietary content and uses them to generate precise, auditable, and context-aware responses.
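The article does not publish implementation details beyond naming the components, but the retrieval step of a Retrieval-Augmented Generation system like the one described can be sketched in miniature. The snippet below is an illustrative toy, not INMA's or Techlabee.ai's actual code: it uses a bag-of-words similarity in place of a learned embedding model, a three-item in-memory corpus in place of INMA's archive, and a `build_prompt` helper (a hypothetical name) to show how retrieved, source-tagged fragments are assembled into a grounded, traceable prompt for the language model.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts with punctuation stripped.
    # A production RAG system would use a learned embedding model here.
    return Counter(t.strip(".,?!") for t in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank corpus fragments by similarity to the query; keep the top k.
    qv = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(qv, embed(d["text"])), reverse=True)
    return ranked[:k]

def build_prompt(query, fragments):
    # Tag each fragment with its source ID so the generated answer
    # stays auditable and grounded in the retrieved material.
    context = "\n".join(f"[{f['source']}] {f['text']}" for f in fragments)
    return (
        "Answer using only the sources below, citing their IDs.\n"
        f"{context}\n\nQuestion: {query}"
    )

# Stand-in corpus; the real system indexes reports, blog posts, video
# transcripts, and other proprietary INMA content.
corpus = [
    {"source": "report-12", "text": "Reader revenue grew as publishers adopted dynamic paywalls."},
    {"source": "blog-88", "text": "Newsletter strategies helped newsrooms retain subscribers."},
    {"source": "case-07", "text": "Programmatic advertising yields declined for many publishers."},
]

question = "How can publishers grow subscriber revenue?"
top = retrieve(question, corpus, k=2)
prompt = build_prompt(question, top)
```

The `prompt` string would then be sent to the underlying LLM, which generates its answer only from the cited fragments; the source tags are what make the response traceable back to the archive.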
Techlabee.ai is part of Poland-based Laboratorium EE and builds secure, cloud-hosted AI tools, from conversational agents and answer engines to content moderation, data retrieval, code modernisation, and AI consulting.