
Increase Team Efficiency with an AI-Powered In-House Knowledge Chatbot

In an era where Large Language Models (LLMs) are transforming how we interact with data and digital tools, there’s growing interest in using these technologies to address real business needs. When ChatGPT started trending last year, we launched this project to show how easily anyone can leverage the technology.

This article kicks off a series about the project, discussing concepts, implementation, and deployment aspects.

The project we have chosen addresses a common inefficiency: teams answering the same questions over and over again. This keeps core team members busy and delays the strategic projects they should be pushing forward. We tackle the problem by leveraging existing resources such as the Zendesk knowledge base, Confluence, and Slack channel histories, and combining them with the cutting-edge capabilities of LLMs.

About us: Patrick is a co-founder, former CPO, and currently the Product Manager for Data & AI at Nezasa. Denis is the founder of SamuylovAI and a strategic consultant with over 12 years of practical experience in advancing data-driven innovation in business and product development.

Series Outline

We have split the series into the project’s logical work packages to guide you, as a reader, through how we approached it. We expect the series to span 3–4 months, with the following articles released along the way:

  1. Introduction to the Project — This article.
  2. Design & Technical Options — Explores the solution space.
  3. Data Extraction & Preparation — Describes how we get the data in.
  4. Our Solution Design — Explains the design we settled on.
  5. Chatbot Integration Options — Illustrates how we put the pieces together.
  6. Deployment and Future Considerations — Reflects on the deployment process and discusses future improvements.

We are excited to share our journey and insights on this project.

Project Goals

During one of our data/AI talks last year, we wondered why, despite the broad recognition of AI and Large Language Models (LLMs) as revolutionary instruments, their adoption and practical utilization are not faster. That’s why we decided to do this project: to show our fellow team members, customers, and partners that launching an initiative using LLMs can be simple, and that doing so can address problems that, so far, were not that simple to solve.

In that spirit, we jotted down the following goals for the project:

  1. Demonstrate the practical value of incorporating LLM-powered solutions into everyday processes.
  2. Explain the core concepts and how they apply to problem types beyond our example.
  3. Create a real-life example that illustrates how to solve one specific problem and can be deployed for end users.

Besides the official goals, we also had some personal motivations:

  • Patrick: “As a co-founder, I’ve accumulated a lot of knowledge over the years, and a good part of it has been written down. People like to ask me because I either know the answer myself or know where it is documented. Thanks to the chatbot, I’ll be able to focus on the new and interesting questions it cannot answer, instead of all the repetitive questions I no longer have to handle.”
  • Denis: “Together with Patrick, we have built the data infrastructure at Nezasa to support business intelligence initiatives. Now, it is possible to centrally access data from multiple systems in a standardized way — a task that is difficult in many organizations but essential for facilitating information discoverability. Given this robust foundation, I was motivated to demonstrate that with the right tools, one can develop data-driven solutions within days, not months.”

Addressing an Actual Business Need

Introducing new technologies and convincing stakeholders works best by quickly demonstrating value and solving users’ real-world problems. At Nezasa, as in many product companies, large parts of the product and tech knowledge are held by the product management and engineering teams. While a critical asset, this knowledge often becomes a source of inefficiency, as key personnel get bogged down by recurring questions (and thus cannot drive strategic tasks such as the product roadmap).

On the other hand, and just as impactful for the business, information seekers wait too long for answers. Think of support agents or customer onboarding managers: the quicker their questions are answered internally, the quicker they can stand out with an excellent, speedy response to customers, making those customers happy and successful.

This means, and it is the case at Nezasa, that despite a culture of thorough documentation and communication, the potential to streamline knowledge access and sharing through technology has remained untapped.

Integrating a knowledge chatbot powered by the advanced capabilities of LLMs can address this business need effectively. By enabling natural language queries against a rich database of documentation, specifications, and communications, such a knowledge chatbot can liberate team members from the repetitiveness of Q&A cycles while significantly reducing wait times for information across the board. From our own experience, we believe an excellent internal knowledge chatbot can considerably improve any company’s execution capabilities.
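
To make this concrete: the pattern behind such a chatbot is usually retrieval-augmented generation (RAG) — embed the documents, retrieve the passages most similar to a question, and let an LLM answer grounded in those passages. Below is a minimal Python sketch of that pattern. It is illustrative only, not our final design (that comes later in the series); the toy corpus, the in-memory index, and the model choices are assumptions, using OpenAI’s standard embeddings and chat APIs.

```python
# Minimal retrieval-augmented Q&A sketch. Illustrative only — not our final
# design. Assumes documents are already extracted as plain text and uses
# OpenAI's embeddings and chat APIs with a simple in-memory index.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Map texts to embedding vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# 1) Index the knowledge base once (toy corpus standing in for Zendesk/Confluence).
documents = [
    "Zendesk KB: How to configure an itinerary template ...",
    "Confluence: Technical architecture of the booking pipeline ...",
]
doc_vectors = embed(documents)

def answer(question: str, top_k: int = 2) -> str:
    # 2) Retrieve the documents most similar to the question (cosine similarity).
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    # 3) Let the LLM answer, grounded only in the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I configure an itinerary template?"))
```

The production questions this sketch glosses over — chunking, vector storage, access control, answer citations — are exactly what the upcoming “Design & Technical Options” article explores.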

High-Level Project Requirements

Any good project needs a solid definition of its scope. Here is what we defined as the scope for the prototype:

1) Prototype

Design and implement a functional prototype of a Nezasa-internal knowledge bot leveraging LLMs. It must be possible for any Nezasa employee to query the main knowledge resources using natural language.

2) Data

Must-Have

  • Product Knowledge Base on Zendesk — represents the most curated feature descriptions of Nezasa’s official products.
  • Company wiki on Confluence — holds feature specifications, documentation of technical architecture, meeting minutes, and much more.

Nice-To-Have

  • Slack channels #product-questions and #datalabs-questions — both are Q&A channels monitored by Product Management and Engineering
  • Internal Tickets on Jira — hold valuable discussions and decisions
  • Customer Tickets on Zendesk — hold valuable problem statements and solution descriptions

3) No additional resources

The project must not require additional resources, especially not engineering resources from Denis’ or Patrick’s teams.

4) Build on existing infrastructure

Add new technology where needed, but stay within the scope of the infrastructure Nezasa already has running.

5) Do good (and cool) stuff and talk about it

Release a series of articles to share/document what was done.

A remark about the data scope: data sensitivity and leakage must be a big concern in any company. We’ve addressed this by deciding to create a company-internal bot for employees only. In addition, we’ll focus on sources with much lower data sensitivity: for example, the product knowledge base is public, and most of the wiki can be accessed by all Nezasa employees.

The Status Quo — Build on Existing

Luckily for us, we worked together on Nezasa’s data infrastructure last year. That means we know it inside out, and we know it is ready. The data foundation we built can easily be extended to meet the requirements of this project: it will be straightforward to get all the required data from the sources and make it accessible to the chatbot solution.

Nezasa’s data infrastructure is already set up as an ELT data pipeline. DBT is used to define transformations, Fivetran and Airbyte are available as ELT tools, and Snowflake is our data warehouse. The steps that remain for us to cover in the project are:

  • Integrate the data sources the knowledge chatbot requires (Zendesk, Confluence, …)
  • Implement extensions of the data pipeline (DBT, SQL) to normalize, merge, and format the data from the different sources (see the sketch below the figure)
  • Create credentials for the KB Bot to access the data
Figure: Data Infrastructure (in blue: what will be added for the KB bot)
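
To give a flavor of the normalization step referenced in the list above: each source gets mapped into one unified document schema, so the chatbot can treat all knowledge uniformly. In the real pipeline this logic will live in DBT SQL models on Snowflake; the Python sketch below only illustrates the mapping, and all field names are hypothetical stand-ins for the real source columns.

```python
# Illustrative sketch of normalizing source records into one unified document
# schema. In our pipeline this will be DBT SQL models on Snowflake; the field
# names below are hypothetical stand-ins for the real source columns.
import re
from dataclasses import dataclass

def strip_markup(text: str) -> str:
    """Crude tag stripper; a real pipeline would use a proper HTML parser."""
    return re.sub(r"<[^>]+>", " ", text).strip()

@dataclass
class KnowledgeDoc:
    source: str       # "zendesk_kb", "confluence", "slack", ...
    doc_id: str       # stable ID, enables incremental re-indexing
    title: str
    body: str         # cleaned plain text
    url: str          # deep link to show next to chatbot answers
    updated_at: str   # lets us re-process only changed documents

def from_zendesk_article(row: dict) -> KnowledgeDoc:
    return KnowledgeDoc(
        source="zendesk_kb",
        doc_id=f"zendesk-{row['id']}",
        title=row["title"],
        body=strip_markup(row["body"]),
        url=row["html_url"],
        updated_at=row["updated_at"],
    )

def from_confluence_page(row: dict) -> KnowledgeDoc:
    return KnowledgeDoc(
        source="confluence",
        doc_id=f"confluence-{row['page_id']}",
        title=row["title"],
        body=strip_markup(row["body_html"]),
        url=row["web_url"],
        updated_at=row["last_modified"],
    )

# Example: one Zendesk row (hypothetical columns) mapped into the unified schema.
doc = from_zendesk_article({
    "id": 42,
    "title": "Itinerary templates",
    "body": "<p>How to configure ...</p>",
    "html_url": "https://support.example.com/articles/42",
    "updated_at": "2024-03-01",
})
```

Merging all sources into one table with this shape is what makes the retrieval side of the chatbot source-agnostic.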

It is also worth mentioning that, thanks to reusing the existing data pipeline, we get automated data updates from the sources into Snowflake for free. On top of that, the only thing we will have to keep updated in an automated way is the metadata our chatbot requires for data retrieval. But more about that later in the article series.

Wrapping Up & Coming Next

As we wrap up this introduction, we’ve set the stage for diving into the nuts and bolts of creating a knowledge chatbot. We’ve outlined the project’s goals and shared how this initiative will use LLMs for practical business benefits at Nezasa.

Next in our series, “Design & Technical Options” will explore the solution space available to us and explain some base concepts one should understand when working with LLM-based solutions.

Stay tuned for insights and lessons learned as we navigate through the creation of this cutting-edge project, aiming to revolutionize internal knowledge sharing.

Follow Us

If you are interested in more content about data-driven solutions, we invite you to follow us on our other social media channels.

(PS: Denis is one of the organizers of GenAI Zürich 2024, a conference on Generative AI — follow it on LinkedIn and the website, and don’t miss the early bird sale running until April 10th.)

FAQ

Q1: What is a knowledge chatbot and how does it increase team efficiency?

A1: A knowledge chatbot is an AI-driven tool that uses large language models to answer queries by accessing a comprehensive database of a company’s internal documents and communications. It increases team efficiency by automating responses to frequently asked questions, thus freeing up key personnel to focus on more strategic tasks instead of repetitive information sharing.

Q2: Why are LLMs considered revolutionary for business applications?

A2: Large Language Models (LLMs) like ChatGPT are considered revolutionary because they can process and understand vast amounts of text data, provide insights, and automate interactions in a natural, human-like manner. This capability can transform business operations by enhancing decision-making, speeding up information retrieval, and personalizing customer interactions.

Q3: How does the proposed Nezasa-internal knowledge bot work?

A3: The proposed Nezasa-internal knowledge bot is designed to interface with existing data infrastructure like Zendesk, Confluence, and internal communication channels. It uses natural language processing to understand and fetch information from these sources, enabling employees to query the main knowledge resources using natural language, thereby streamlining information retrieval.

Q4: What are the main data sources for the Nezasa knowledge chatbot?

A4: The main data sources for the Nezasa knowledge chatbot include the product knowledge base on Zendesk, the company wiki on Confluence, Q&A channels on Slack, internal tickets on Jira, and customer tickets on Zendesk. These sources provide a rich dataset for the chatbot to draw from, ensuring comprehensive coverage of company knowledge.

Q5: What are the anticipated benefits of integrating a knowledge chatbot within Nezasa?

A5: Integrating a knowledge chatbot within Nezasa is expected to reduce the time spent by key personnel on repetitive queries, decrease the wait time for information retrieval across departments, and enhance overall productivity. Additionally, it aims to promote a more efficient knowledge sharing culture and support faster decision-making processes.

Q6: How does Nezasa ensure data security and privacy with the chatbot?

A6: Nezasa prioritizes data security and privacy by focusing on internal and less sensitive data sources for the chatbot, such as public product knowledge bases and general wiki content. The chatbot is designed for internal use only, with strict access controls and compliance with data protection regulations to prevent data leakage.