Data Talent Pulse

ChatGPT Business Subscription announced + Exploring implementation options

DTP #2

OpenAI says it is working on a business subscription plan for ChatGPT, with the aim of giving professionals and enterprises more control over their data.

“ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” OpenAI wrote in a blog post published last week.

The business subscription is set to launch in the coming months. Alongside ChatGPT Business, OpenAI now lets ChatGPT users turn off their chat history, so those conversations are no longer used to train the platform's language models.

(Image: OpenAI)

Capping it off, the startup announced an export option for chat data, letting users email themselves a copy of the information stored. OpenAI says this will not only let users move their data elsewhere, but also help them understand what information the service keeps.

With the business subscription plan incoming, now is a good time to look at a few ways in which enterprises can set up the infrastructure for implementing ChatGPT into their workflows.


Implementing ChatGPT for businesses

In a recent survey of US-based respondents, 49% of participating companies stated that they are already using ChatGPT, and 93% said they are looking to expand their use of ChatGPT-based chatbots.

Source: resumebuilder.com survey

This is not an easy task. Expanding the use of GPT and fully integrating it into business workflows requires businesses to go beyond the basic ChatGPT platform.

In terms of adoption and scaling, this means enterprises are looking at:

  • Identifying and building custom GPT models for their use cases.

  • Finding experts who can work with these GPT models.

The basic approach to using GPT-4, the one with the most plentiful examples, is to use the ChatGPT platform out of the box. Our last issue touched on this and took a cursory look at the value prompt engineers can add to businesses.

The problem with this approach is that there is limited control over the output that businesses can get, and no access to APIs. This is a deal breaker for businesses that want to scale the use of ChatGPT.

The other approach is building a tailored GPT model. Enterprises can do this, with the help of specialists, by adding their own data and tuning the model's parameters.

Note: The base GPT model behind the ChatGPT service cannot be altered directly. However, businesses can fine-tune OpenAI's base GPT models through its API and deploy the result in their own chatbot engine, without going through the ChatGPT application. The resulting model is then used much like any other large language model.
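To make the "used like any other large language model" point concrete, here is a minimal sketch of a chatbot engine that treats the model as a swappable backend. All names here are illustrative: the wrapper class, the `toy_backend` stand-in, and the model name are assumptions, not a real API.

```python
# Sketch: a tailored model slots into a chatbot engine the same way any
# other LLM would -- behind a small wrapper with a uniform interface.
# The class, backend, and model name below are illustrative placeholders.

class ChatEngine:
    """Minimal chatbot-engine wrapper. In practice, `generate` would be
    an API call to a hosted model or an inference call to a fine-tuned
    open-source model; here it is any callable taking the chat history."""

    def __init__(self, model_name, generate):
        self.model_name = model_name
        self.generate = generate  # callable: list of message dicts -> str
        self.history = []

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = self.generate(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply


def toy_backend(messages):
    # Placeholder standing in for a real model call.
    return "ack: " + messages[-1]["content"]


engine = ChatEngine("acme-support-llm", toy_backend)  # hypothetical name
print(engine.ask("How do I reset my password?"))
```

The point of the wrapper is that swapping ChatGPT out for a tailored model only changes the `generate` callable; the rest of the application is untouched.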

This approach requires significant expertise, data curation and funding.

Understandably, due to limited resources, small to medium scale organizations may be unable to develop their own language models.

Furthermore, there are potential risks associated with closed-source pay-per-use models which may dissuade many businesses from pursuing this option.

Instead, they may choose options like smaller open-source language models. Model libraries from companies such as Hugging Face may be appealing: these models can be "fine-tuned" on specific internal data, delivering impressive business value despite lacking the broader training data of base GPT.
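A large share of the fine-tuning effort is simply getting internal data into a flat prompt/completion format that fine-tuning tooling can consume. The sketch below shows that preparation step with made-up field names and records; the actual schema depends on the tooling you use.

```python
import json

# Illustrative data-prep step for fine-tuning a smaller open-source model
# on internal data. Record fields and the FAQ content are made up.
def to_finetune_jsonl(records):
    """Convert internal Q&A records into prompt/completion JSON lines,
    a flat format commonly accepted by fine-tuning tooling."""
    lines = []
    for rec in records:
        example = {
            "prompt": f"Question: {rec['question']}\nAnswer:",
            "completion": " " + rec["answer"].strip(),
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)


support_faq = [
    {"question": "How do I reset my VPN token?",
     "answer": "Open the IT portal and choose 'Reset token'."},
    {"question": "Where are expense reports filed?",
     "answer": "In the finance workspace, under 'Expenses'."},
]
print(to_finetune_jsonl(support_faq))
```

Because the data never leaves this script, the same pipeline can run entirely inside the organization's firewall, which is one of the confidentiality advantages discussed below.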

Several advantages come with these customized open-source models:

  1. Output that is more specific and relevant to the organization, thanks to the model's "few-shot learning" ability: the model can learn a domain from just a few labelled examples.

  2. Controlled costs for running the model, eliminating exposure to changes in API pricing from for-profit suppliers.

  3. Improved control over output moderation, and increased data confidentiality, as all data stays within the organization's firewall.
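A lightweight way to see the few-shot idea from point 1 in action is in-context prompting, where a handful of labelled examples are placed directly in the prompt so the model picks up the task format. The task, labels, and tickets below are invented for illustration.

```python
# Sketch of few-shot prompting: a few labelled examples are included in
# the prompt itself. The classification task and data are hypothetical.
def build_few_shot_prompt(examples, query):
    parts = ["Classify the support ticket as 'billing' or 'technical'.\n"]
    for text, label in examples:
        parts.append(f"Ticket: {text}\nLabel: {label}\n")
    parts.append(f"Ticket: {query}\nLabel:")
    return "\n".join(parts)


labelled = [
    ("I was charged twice this month", "billing"),
    ("The app crashes on startup", "technical"),
]
print(build_few_shot_prompt(labelled, "My invoice total looks wrong"))
```

The prompt ends at "Label:", leaving the model to complete it with the predicted class; adding or swapping examples changes the task without retraining anything.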

Although these models may not possess the broad capabilities of a general-purpose LLM like GPT-4, many of those capabilities are irrelevant for targeted enterprise applications. For example, most business use cases wouldn't require that an LLM provide an output in the form of poetry, or a list of the warmest countries in descending order.

Specialized expertise may be necessary during the initial setup phase, but these models can subsequently be implemented throughout the business, benefiting almost all business units.

Thus, creating the infrastructure to enable such reuse is an important prerequisite to implementing this type of model and scaling it across business use cases.

A quick summary of what we discussed above:

Organizations have a few options when it comes to implementing GPT before they scale:

  • High setup cost, optimized for use cases: open-source models built in-house with the help of experts.

  • A compromise between effort and optimization: using models as a service from third parties (closed or open source).

  • Quick to set up, no optimization: using GPT out of the box.

GPT-4 is here (with more updates to come in the future) and it has the potential to be a game-changer for organizations looking to leverage AI.

However, it's important to remember that implementing such a powerful tool takes planning, and organizations need to be prepared to take full advantage of all it has to offer.

With the right approach and preparation, GPT-4 could have a significant positive impact on your business.

See you next week,

Mukundan

Do you have a unique perspective on developing and managing data science and AI talent? We want to hear from you! Reach out to us by replying to this email.
