
How Generative AI Impacts the Technical Content Industry

Key Predictions

WHITE PAPER

BY ZOOMIN
MARCH 2023


Background

The recent introduction of Generative AI technologies, and in particular Large Language Models (LLMs) such as ChatGPT (the most famous example), represents a major potential disruption to the technical content industry and to the ways in which such content is produced and consumed today. For the first time, LLMs are no longer perceived as expensive, high-end luxury technology that requires fleets of data scientists to tune and train; they are increasingly seen as affordable, low-maintenance, commoditized infrastructure routinely used by consumers.

The ability to quickly produce and analyze technical content en masse, with a high degree of accuracy and consistency, and from a wide range of inputs, holds huge promise. Content can be generated more quickly and with less manual labor, leading to significant cost savings for organizations and reduced customer effort, and resulting in happier and more productive content consumers. We believe that fast and smart implementation of LLMs throughout the technical content cycle will be vital for surviving the paradigm shift in content creation and delivery.

In mainstream research, content production and consumption are consistently mentioned among the top five areas that will be disrupted by these new technologies. We are already witnessing this shift in other content-intensive areas, such as content marketing and the media industry. Surveys we have issued show that the TechComm industry largely understands the ramifications and has already prepared itself to embrace the coming change. When asked whether these new technologies would result in a major disruption to the industry, 59% of respondents expected a major disruption, while only 12% considered this a temporary buzz. Furthermore, when asked how far along they are in evaluating and adopting these technologies, 37% of respondents were already in some form of early pilot or rollout process, while the others said they were actively studying the art of the possible ahead of making decisions.

This high-level review outlines our major predictions about how our industry will be affected by this emerging technology. Enjoy the read.

The Enterprise Technical Content Workflow

Enterprise content typically follows a similar workflow across industries. This workflow consists of the following four major stages:

  1. Authoring: the act of compiling the content itself. This can be done by various people inside and outside of the customer's organization, including technical writers, SMEs, and others.
  2. Management, Localization & Review: typical content management, including content storage, backups, version control, integrity checking, and others. In addition, this includes activities such as: translating the content to various international audiences, running internal content reviews that validate the accuracy of the content, and auditing the content to evaluate its alignment to the company content standards (e.g. tone of voice, glossary, style).
  3. Publishing & Consumption: the act of delivering the content to the target audiences (typically customers, prospects, partners and employees) at their point of need for them to self-serve.
  4. Measurement: the act of assessing who is actually consuming the content, their level of satisfaction with the content and its delivery, and their sentiment when reading it, as well as driving specific iterative improvement cycles based on these insights.

In this white paper, we will present our top predictions of how Generative AI has the potential to disrupt all of the stages in this content workflow and reshape the content ecosystem as a whole.

Our Top Predictions

Prediction: New balance between SMEs and technical writers

Why?

Historically, the work of writing documentation was split between two primary roles: the subject matter expert (SME) and the technical writer. The SME was responsible for being the domain expert, educating the technical writer, and approving the final deliverable. The technical writer was responsible for taking the SME's knowledge and formalizing it into a consistent, clear, engaging piece of content that fits well with the rest of the company's information and content standards. Much of the reason why SMEs, despite their seniority, weren't given the full authority to generate formal customer-facing content was that companies lacked an objective method for assessing their writing quality and weren't sure if the SME had the global knowledge of how a specific deliverable would fit in with the rest of the existing content.

What's changed?

The introduction of Generative AI changes this balance. Even today, you can give a ChatGPT-like bot a high-level overview of the knowledge itself and ask it to generate an extended, cohesive knowledge article that can be used as a scaffold, providing the writer with about 80% of the task in a matter of minutes. The more advanced these bots become, the more they will learn your company's writing style guides, tone of voice, and preferences; in addition, they will learn from prior knowledge to teach SMEs what great writing looks like for your organization, allowing SMEs to independently generate better deliverables. SMEs will no longer spend time training someone else (i.e., the writer) on the domain and will also not need to review their work, leaving them time to generate better content. Finally, given that there are always more SMEs than writers, organizations can quickly scale up content production and generate larger quantities of content in a shorter period of time.
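To make this concrete, here is a minimal sketch of that scaffolding step, assuming the OpenAI Python client; the model name, style guide, and SME notes are hypothetical placeholders, and any comparable chat-capable LLM API would follow the same shape.

```python
# Minimal sketch: drafting a knowledge-article scaffold from an SME's
# high-level notes. Assumes the OpenAI Python client (`pip install openai`)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

STYLE_GUIDE = "Use active voice, second person, and short task-oriented steps."

sme_notes = """
Feature: scheduled exports. Users can schedule CSV exports of reports.
Supported intervals: daily, weekly. Exports are emailed as links, not attachments.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute your own
    messages=[
        {"role": "system",
         "content": f"You are a technical writer. Follow this style guide: {STYLE_GUIDE}"},
        {"role": "user",
         "content": "Expand these SME notes into a structured knowledge article "
                    f"with a title, overview, prerequisites, and steps:\n{sme_notes}"},
    ],
)

draft = response.choices[0].message.content  # a scaffold for the writer or SME to refine
print(draft)
```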

What does this mean for you?
  • Simplification and democratization of tools and technology stacks: as more SMEs enter the authoring space, they will each want to use their own favorite authoring tool. These tools will need to be very intuitive and simple in nature, as the SMEs are not professional writers and don't want (or need) the bells and whistles of professional authoring suites.
  • The need for content orchestration will remain, and may even grow stronger: more authoring tools, sources, and formats will need to be tightly synchronized into a single publishing pipeline.
  • Technical writers will need to spend more of their time being the "center of excellence," mentoring and supporting the SMEs' content production. Thus, the writer becomes the SME of content creation, production, and publication.

Prediction: Structured content will undergo a transformation in order to stay relevant

Why?

XML-based structured content standards (e.g. DITA) offer a lot of power, but they require an extensive learning curve, appropriate tooling, and skilled personnel to build and maintain content operations. Even today, some organizations find them too complex and are reverting to, or balancing them with, simpler and cheaper (although less powerful) formats (e.g. Markdown) and all-in-one KB tools.

What's changed?

Over time, and with the assistance of Generative AI, SMEs will take a primary role in the creation of content. These SMEs are not full-time writers and lack the skills, capacity, and desire to adopt and use complex structured content standards and tools; they simply don't see the value.

The need for structured content and self-contained chunks is not going to go away. But as automated text analysis technologies improve, computers will be able to extract the semantics and structure even from large blobs of text that are not written in an ideal way, adding the structure that the SME's tool cannot create.

However, structured content, if used correctly, can dramatically help with the definition of content semantics, making the content highly optimized for programmatic consumption via search, filtering, and personalization. Standards committees will need to modify their standards accordingly to make sure they provide optimal support for what AI models require.
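As a sketch of what such automated structure extraction could look like, the snippet below asks an LLM to recover a structured chunk from an unstructured blob. It assumes the OpenAI Python client, and the model name and JSON schema are hypothetical stand-ins for whatever structured target (DITA or otherwise) you publish to.

```python
# Minimal sketch: asking an LLM to recover structure from an unstructured
# text blob written by an SME. The JSON schema below is a hypothetical
# stand-in for the structured format your pipeline actually uses.
import json
from openai import OpenAI

client = OpenAI()

raw_blob = """To reset your password click forgot password on the login page,
then check your email, the link expires after 24 hours by the way."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extract structure from documentation text. Respond with JSON "
                    'of the form {"title": str, "topic_type": "task|concept|reference", '
                    '"steps": [str], "notes": [str]}.'},
        {"role": "user", "content": raw_blob},
    ],
    response_format={"type": "json_object"},  # request machine-readable output
)

chunk = json.loads(response.choices[0].message.content)
print(chunk["title"], chunk["steps"])  # structured fields recovered from the blob
```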

What does this mean for you?
  • If you are using structured content, explore more deeply how you can follow content architecture best practices and make sure you're providing optimized layers of metadata and semantics for your content.
  • You should be ready to embrace additional publication pipelines with non-structured content at their core.

Prediction: Consumption will shift from being search-based to being conversational-based

Why?

In the last 20 years, users have accepted Google as the "holy grail" experience for finding answers. To be more specific, users developed habits of writing optimized, concise, and minimalistic search queries in order to get a collection of what would likely be the most useful links to potential results. In other words, users would not typically expect to get a direct authoritative answer, but instead to get references to places where they can research and find the answers on their own.

What's changed?

With LLMs gaining traction and with Microsoft and Google embedding conversational experiences into their search engines (with Bing and Bard, respectively), conversational language is becoming part of the new standard for getting answers. This shift changes the way customers expect to get answers: direct and conversational, rather than clicking links, reading text, and summarizing on their own.
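The practical difference is that a conversational flow carries dialogue history and grounds its answers in your content, whereas a search box is stateless. Below is a minimal sketch under those assumptions, using the OpenAI Python client; `retrieve_passages` is a hypothetical retrieval helper over your documentation, and the model name is a placeholder.

```python
# Minimal sketch: a conversational answer flow that carries dialogue history,
# in contrast to stateless one-shot search queries.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "Answer questions using only the provided documentation."}]

def answer(question: str, retrieve_passages) -> str:
    # Ground the model in your own content rather than its training data.
    passages = retrieve_passages(question)          # hypothetical retrieval step
    context = "\n\n".join(passages)
    history.append({"role": "user",
                    "content": f"Documentation:\n{context}\n\nQuestion: {question}"})
    response = client.chat.completions.create(
        model="gpt-4o-mini",                        # placeholder model name
        messages=history,                           # full history enables follow-ups
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because the full history is passed on every turn, a follow-up like "does that work on Windows too?" can be resolved against the earlier exchange, which a traditional search query cannot do.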

What does this mean for you?
  • User tolerance for browsing through a large amount of search results will decline.
  • As a conversational Q&A flow is easy to embed, we can assume that customers will want to use it across multiple touchpoints, including in-product help, embedded chatbots, search, Google, etc.
  • Customers will expect a high degree of relevancy and personalization. When they're searching for answers by typing long-form conversational text, they will compare Zoomin's results to what they would get from giants such as Microsoft and Google.
  • Various search features (e.g. clustering, personalization) will be expected to be part of the conversational experience, where relevant.
  • Analytics will have to change as well: as an industry, we all understand how to measure search effectiveness, but we are largely inexperienced with measuring the effectiveness of conversational interactions.

Prediction: Accuracy will be challenging but it will not hamper long-term adoption

Why?

With the emergence and media coverage of ChatGPT, and the fact that the overall quality of the answers provided is above expectations, some people may think that LLMs are ready for prime time and can provide a magic solution for extracting complex answers from their content with super-high accuracy.

What's changed?

LLMs are just starting to reach the threshold of sufficient performance. In cases where the training set is not large enough, or is mostly domain-specific and non-standard, certain answers may be rejected by customers as incorrect or simply irrelevant. In extreme cases, customers may rely on false answers to operate complex products, which may lead to customer complaints, escalations, or even legal or punitive action.

Although technology is improving at a blazing speed and customer content consumption habits are changing dynamically, we still believe that with the right mitigations, companies can reach a balance that provides their customers an elevated level of customer experience and usability while mitigating the impact of potential risks.

What does this mean for you?
  • Users will expect to receive compensating controls and features which will help them validate content accuracy. For example, users will likely come to expect footnotes and references as part of the answer, so that they can explore deeper and/or verify the authenticity and correctness of the answer.
  • You will be expected to keep, at least for the foreseeable future, a mix of conversational interfaces powered by LLMs alongside the more "traditional" search interfaces. Users will not want you to deprecate search until sufficient trust in LLMs is built.
  • Your legal department (and, indirectly, your users) will expect clear and written disclaimers on the usage of LLMs and the risks involved.
  • You should map your ability to proceed based on your industry. We expect a major difference in the rate of adoption between highly regulated and non-regulated industries. Industries where the risk of giving wrong answers does not typically translate into mission-critical, potentially life-threatening mistakes will be more tolerant, whereas companies with a lot to lose will not take these risks until the technology is more mature.

Prediction: Content consolidation and governance will be a must-have for being AI-ready

Why?

Today, the generation of technical content is done in silos, by different teams, using different formats and tools. This is not likely to change any time soon, and may even intensify as the move to SME-generated content accelerates. Despite the silos, the market as a whole still sees the integration of siloed content as a nice-to-have, and most companies do not have a real solution in place that provides an always up-to-date, consolidated source of truth.

What's changed?

LLMs are only as good as the content they are fed. If content is siloed and inaccessible, AI will fail to deliver accurate and comprehensive answers and will not live up to expectations. Companies will realize that they have two big problems to solve: (1) consolidation, getting all of their content into one place where it can be fed to the LLM, and (2) governance, solving for security and privacy to make sure sensitive information does not leak out and make its way into the wrong hands.
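A minimal sketch of these two steps follows; the content sources and redaction patterns are hypothetical, and a production pipeline would rely on real connectors and dedicated PII-detection tooling, but the overall shape is the same.

```python
# Minimal sketch: consolidating siloed content and applying a governance
# filter before anything reaches an LLM. The SENSITIVE patterns below are
# hypothetical placeholders for your own security and privacy policies.
import re
from typing import Iterable

SENSITIVE = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g. SSN-like patterns
             re.compile(r"api[_-]?key\s*[:=]\s*\S+", re.I)]  # e.g. leaked credentials

def consolidate(sources: Iterable[Iterable[str]]) -> list[str]:
    """Merge documents from every silo into one corpus (consolidation)."""
    return [doc for source in sources for doc in source]

def govern(docs: list[str]) -> list[str]:
    """Redact sensitive matches so they never reach the model (governance)."""
    cleaned = []
    for doc in docs:
        for pattern in SENSITIVE:
            doc = pattern.sub("[REDACTED]", doc)
        cleaned.append(doc)
    return cleaned

# Hypothetical silos: a KB export and a wiki export, merged then governed.
corpus = govern(consolidate([["Doc from the KB..."], ["Doc from the wiki..."]]))
```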

What does this mean for you?
  • Every company that wants to be ready for the Generative AI revolution will need to invest in integrating silos and putting together content consolidation solutions and governance infrastructure.

Prediction: The rise of "synthetic" answers, or content mashups

Why?

Today, different departments in companies are writing different content related to the same subject or issue. You can have documentation topics, knowledge base articles, peer-to-peer discussions, and training materials all talking about the same subject, but written by various functions who hardly know about one another. The typical customer experience today is that these various materials are all "thrown" at the user as they are surfaced in search results, and the customer is faced with choosing the most applicable result that best matches their question or problem. Typically, they will choose individual pieces of content, without seeing the "aggregate" nature of the information that could have been collated from all of the different content pieces written about the subject at hand.

What's changed?

LLMs can analyze all of these content pieces and dynamically build a "content mashup", i.e. a new "synthetic" summary made from fragments of the various content pieces, rewritten as a cohesive and personalized answer. In other words, LLMs build new content that was never explicitly authored by anyone, based on all available information, saving users the need to visit and read all the individual content pieces one by one.
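Here is a minimal sketch of such a mashup, assuming the OpenAI Python client; the content fragments and their source labels are hypothetical. Keeping the source labels in the prompt also lets the model cite the pieces it drew from, which helps users verify the synthetic answer.

```python
# Minimal sketch: building a "synthetic" answer from several content pieces
# written by different teams. The fragment sources are hypothetical examples.
from openai import OpenAI

client = OpenAI()

fragments = {
    "docs/install.html": "Install the agent with `agentctl install` ...",
    "kb/KB-1042": "If installation fails on Windows, disable antivirus first ...",
    "community/thread-88": "Worked for me after rebooting and re-running install.",
}

context = "\n\n".join(f"[source: {src}]\n{text}" for src, text in fragments.items())

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Combine the fragments below into one cohesive answer. "
                    "Cite each [source: ...] you draw from so users can verify."},
        {"role": "user",
         "content": f"{context}\n\nQuestion: How do I install the agent on Windows?"},
    ],
)
print(response.choices[0].message.content)  # one mashup answer, with references
```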

What does this mean for you?
  • The typical federated search experience as we know it today will be less important, as users will choose not to navigate between the various sources and instead will opt to get the processed output.
  • Content that was not heavily viewed in the past may suddenly become valuable because it links other content together, allowing the LLM to create a more cohesive summary.
  • Inherent overlaps, conflicts, or contradictions between various pieces of content addressing the same subject might be exposed to users, and will need to be identified and mitigated promptly to maintain user trust.

Prediction: Content review will heavily shift to be mostly centered around fact-checking

Why?

Content review today is a manual and tedious process. The review typically consists of a few components. Among others, the content is checked to ensure that it is accurate from a technical point of view, that it can be followed and is functional, that it is mechanically correct in terms of spelling, grammar, style, tone of voice, and writing guidelines, and that it is pedagogically correct in terms of where the information sits relative to other information. This process today involves both the SME and the professional writer.

What's changed?

LLMs will be able to do most of the stylistic editorial work on the content. Even today, they can largely enforce writing style guides and rewrite content to better fit the target style without significant reduction in accuracy.

Since a lot of the content will be written by SMEs who are also the domain experts, reviews are expected to be shorter and to focus more on the accuracy and completeness of the content, with the style validation being almost entirely automated.
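As an illustration, here is a minimal sketch of an automated style pass, assuming the OpenAI Python client; the style rules and model name are hypothetical examples.

```python
# Minimal sketch: automating the stylistic half of a review pass so human
# reviewers can focus on fact-checking. The style rules are hypothetical.
from openai import OpenAI

client = OpenAI()

STYLE_RULES = (
    "Use present tense and active voice. "
    "Use 'select' instead of 'click on'. "
    "Keep sentences under 25 words."
)

def style_pass(draft: str) -> str:
    """Return the draft rewritten to match the style guide, facts unchanged."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Rewrite to match this style guide without changing "
                        f"any technical facts: {STYLE_RULES}"},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(style_pass("The button should be clicked on by the user to start."))
```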

What does this mean for you?
  • You will need to set a new review process and strategy that balances the augmentation of AI tools (to boost productivity) with proper mitigations to control review accuracy, and that aligns the various stakeholders.

Prediction: New forms of KPIs will replace many of the KPIs we know today

Why?

Technical content analytics today is heavily influenced by web analytics. KPIs such as bounce rate, session duration, and click-through rate are very popular and frequently tracked.

What's changed?

Conversational experiences change how users interact with content and require new KPIs that are better suited to evaluating success. For example, the three KPIs outlined above might not even be relevant in a conversational context.

What does this mean for you?
  • You will need to determine a clear strategy for measuring the impact of conversational content consumption.
  • You will need to start regularly following new indicators of accuracy, such as the percentage of questions answered, the distribution of customer feedback on answers, etc. (see the sketch below). Stay tuned to Zoomin's analytics updates as we provide a canned analytics experience for conversational interactions.
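Here is a minimal sketch of how two such indicators could be computed from an interaction log; the log format below is a hypothetical placeholder for whatever your analytics pipeline records.

```python
# Minimal sketch: computing two conversational KPIs from an interaction log.
from collections import Counter

interactions = [
    {"question": "How do I export?", "answered": True,  "feedback": "positive"},
    {"question": "Quota limits?",    "answered": True,  "feedback": None},
    {"question": "ARM support?",     "answered": False, "feedback": "negative"},
]

# KPI 1: percentage of questions that received an answer.
answer_rate = sum(i["answered"] for i in interactions) / len(interactions)

# KPI 2: distribution of explicit customer feedback on answers.
feedback_distribution = Counter(i["feedback"] for i in interactions
                                if i["feedback"] is not None)

print(f"answer rate: {answer_rate:.0%}")  # e.g. "answer rate: 67%"
print(dict(feedback_distribution))        # e.g. {'positive': 1, 'negative': 1}
```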

Prediction: LLMs will assist in insights generation and communication

Why?

Today, analytics is mostly consumed manually by professional analysts or business functions who are relatively skilled at operating complex BI tools and dashboards. This means that analytics is underutilized, and regular insights about the consumption of technical content and its impact are not making their way to leadership at most companies.

What's changed?

Just as LLMs currently write economic updates, summaries, and trend briefs for stock analysts (analyzing large amounts of market data), they will be able to analyze content consumption data and generate short briefs for leadership showing general trends and opportunities for improvement.
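A minimal sketch of such a briefing step, assuming the OpenAI Python client; the metrics dictionary is a hypothetical example of what an analytics platform might export.

```python
# Minimal sketch: turning aggregated content analytics into a short
# leadership brief. The metrics dict is a hypothetical analytics export.
import json
from openai import OpenAI

client = OpenAI()

metrics = {
    "period": "2023-Q1",
    "sessions": 48210,
    "answer_rate": 0.71,
    "top_unanswered_topics": ["SSO setup", "rate limits"],
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write three-sentence executive briefs on content "
                    "analytics, highlighting trends and improvement opportunities."},
        {"role": "user", "content": json.dumps(metrics)},
    ],
)
print(response.choices[0].message.content)  # short brief ready to distribute
```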

What does this mean for you?
  • Over time, you will need to explore the use of automated insight generation from your content analytics and create a channel to distribute these outputs to your organization.

Conclusion

In this white paper we discuss the impact of Generative AI on technical writing and structured content, making predictions for the future. We suggest that SMEs will take on a larger role in content creation, leading to the democratization of tools and technology stacks, the simplification of authoring tools, and an increased need for content orchestration. Structured content will undergo a transformation to stay relevant, with automated text analysis technologies extracting semantics and structure from large blobs of text.

This means companies using structured content should explore more deeply how they can follow content architecture best practices and embrace additional publication pipelines with non-structured content at their core. The consumption of content will shift from being search-based to being conversational-based, as conversational language becomes part of the new standard for getting answers. This shift changes the way customers expect to get answers and will require a high degree of relevancy and personalization.

To ensure that companies are ready to handle the impact of these changes, their next steps should include preparing their content to be compatible with GPT. Content readiness will be crucial in providing a seamless GPT experience, particularly for technical content.

This involves consolidating and governing their content to ensure that it is accurate, secure, and relevant. By doing so, companies can ensure that any user can effectively consume their content through a GPT solution.

Book a Meeting

Connect with us!
