👩‍💻 Evolution of Prompt Engineering, AI Adoption Challenges

DTP #51: Q&A with Tim Miner

This week: 

Q&A: We spoke to Tim Miner, Founder and CEO of By The People Technologies, about how expectations around prompt engineering have evolved over time and why the skill matters in the AI landscape. 

AI in Business: Google's search engine is shifting to AI-driven results with "AI Overviews," a change that could reshape how people use the internet despite open questions about accuracy, publisher traffic, and privacy. 

Ethical AI: Ethical SEO involves using AI transparently, ensuring accuracy, respecting privacy, and promoting original, useful content while adhering to search engine guidelines and prioritizing user experience. 

Read below: 

💼 AI in Business 

Google's AI Overviews set to change the Internet

Google's core product, the search engine, is transitioning towards AI-driven results, with AI-generated answers, termed "AI Overviews," set to appear at the top of search results. 

Implications of AI in Search 

  • This shift is significant as the search engine is a primary interface for internet interaction. 

  • The new AI-driven approach is seen as a major change in Google's role and the web's economy. 

Testing and User Experience 

  • Despite testing, AI-generated results are not always accurate. 

  • The impact on users and publishers, who rely on Google for traffic, remains uncertain. 

Competition and AI Commitment 

  • Google's AI updates aim to fend off competition from companies like OpenAI. 

  • Google showcased various AI advancements at its developer conference, Google I/O. 

New AI Features 

  • Introduction of new image, audio, and video generation tools. 

  • Upgraded voice assistants and tools to manage documents, meetings, and inboxes. 

  • Real-time language scanning for scam detection in phone calls.  

Google's AI ambitions reflect an ongoing need for user data to improve AI capabilities. The trade-off between enhanced AI services and user privacy remains a central issue. 

Q&A - Evolution of Prompt Engineering

How would you say the expectations of what a prompt engineer is have changed between when the role was first introduced and now? 

Good question. So, first of all, let me tell you what we do. I am the founder of By The People Technologies. We are a system integrator for AI systems. We consider ourselves AI strategists, so we help small and medium-sized enterprises find where they should implement AI systems in their products or services to become more competitive. Just as simple as that. 

In the process of doing that, we run engagements to get into and get to know the companies that are interested in implementing AI. One of those is a webinar, and another is a workshop. We do in-house workshops to help people learn how to do prompt engineering and how it can help them think and engage at a higher level. 

“Prompt engineering, I think, has drastically changed as we as humans have adapted to the generative AI products that first came to us.”

Much of early prompt engineering was about “How do I get better text out of my generative AI?”, and as new large language models came onto the market, there were suddenly differences. 

I think one of the first things everybody noticed when Anthropic’s Claude came on the market was that it had a larger context window and therefore could deliver on much more creative prompts. For example, you could ask it to write a complete story, or ask it for many other connections, things that just got lost within a smaller context window, especially with GPT-3.5, right? It just wasn't as effective at that as Claude was. 

I think prompt engineering at the beginning was certainly focused on how one could get better text out. I took it from a different perspective, and we usually take that perspective in the classes we give: helping people understand that generative AI is, by its nature, generative. It formulates a fresh response every time you prompt it, even for the same query. 

We're all used to desktop applications that have helped us automate certain tasks, and we expect that when we input a certain thing, we get a certain thing back out. The problem is that generative AI just doesn't work that way. Your prompting works more on a relative basis. 

You get different results each time because of the generative nature of the system. This leads back to discussions with some of the architects of OpenAI and other systems; they often don't fully understand how or why it works. They've built this massive capacity to absorb information and then equipped it with language. 

They provided it with the language found all over the internet and said, 'Now go, learn from this language, understand how things are expressed.' In Australia, some individuals discuss prompt engineering in a way that emphasizes its necessity, citing scenarios where they achieve remarkable results using generative AI products. 

When you ask why it works so well, they admit they have no idea. It's as if the system has taken on a life of its own, learning through language. 

Language, as humans use it, shapes our thoughts. So when you give the AI language, you're essentially teaching it how to think. Each time you interact with it, even using the same words and prompts, you'll get different results based on context. 

This concept can be challenging for students to grasp because they expect computer programs to provide consistent outputs. Our goal in refining prompts isn't to ensure consistent results every time, but rather to meet our expectations. 

In my experience, about 60-70% of people encounter frustrations with generative AI. They try different prompts but don't get the desired outcomes. They expect the AI to understand complex requests, like providing accurate product descriptions, but it often falls short. 

If we can input more context and specify our goals, we can move closer to obtaining the desired output from the AI. 

So you don't necessarily need it to give you the same answer every time, but to meet the expectation you have every time. 

I would put it this way: let's take the extreme from the other side, an accounting application. This accounting application calculates the company's EBITA every month. The same answer, right? You need it every time. If those are the numbers we derived this month, it needs to come out exactly the same. 

Generative AI is completely on the opposite end of the spectrum, as our language is. 
 
So the more we can teach people, through prompt engineering, to address the generative AI product as they would another human, the closer we get them to using it effectively in the work they're trying to do. 
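
To make the generative behavior described above concrete, here is a minimal sketch, assuming OpenAI's Python client (v1+) and an illustrative model name, of sending the identical prompt twice and getting back two different answers:

```python
# Minimal sketch: the same prompt, sent twice, usually yields two different
# answers because the model samples its output rather than computing it
# deterministically. Assumes the openai package (v1+) and an OPENAI_API_KEY
# in the environment; the model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
prompt = "Write a one-sentence description of a stainless-steel water bottle."

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # nonzero temperature means sampled, varied output
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

Setting the temperature to 0 makes the output nearly deterministic, but as the interview notes, the goal of prompt refinement is usually to meet your expectation every time, not to freeze the wording.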

There are a lot of large language models out there. Do you see that prompt engineering differs based on which model you're trying to work with? 

It's mildly nuanced. When comparing creative outputs, like short stories, Claude from Anthropic tends to be better than ChatGPT. If you put their outputs side by side, you might notice that Claude shows more language expertise in the way it expresses itself. [However, this isn't always consistent.] 

I appreciate the various benchmarks people have developed, but I have to say, those benchmarks don't really apply to most white-collar workers who need generative AI to improve their job performance. The benchmarks often don't align with their ways of working.

“I consider prompt engineering one of those skill sets that is going to be required in everybody's job.”

People who work as salespeople, accountants, marketers, product managers: they will all need to learn how to do prompt engineering. It's going to be a basic skill of the future. If you can say now, “Hey, by the way, I can do this same job, but using the skill set of prompt engineering I can do it better,” I think you're going to float to the top of the resume pile when you're looking for a job right now. 

In your work with small and medium-sized businesses, what are the barriers to entry for these businesses when it comes to adopting AI? 

I think there are two main preconceived ideas: first, that it's everywhere, so people assume they already know about it. For example, I hear a lot of people say, “My son uses ChatGPT to do his homework, I helped him with it, so I know what it is. I don't need to take a webinar or have someone teach my company how to use it. It's free and everyone can use it.” 

These assumptions create barriers to really getting into AI. I try to explain that what companies need is a culture of AI. Everyone, even the receptionist and the secretaries, should be using AI to figure out what can be augmented and automated. This is the only way to stay competitive because, one day, your competitor will be operating at twice the speed and half the cost, and it will take you six months to catch up—an extremely costly delay. 

The second misconception is that implementing AI requires perfect data and a huge budget, often assumed to be at least a couple of million dollars. In reality, you can take small, trial steps—proof of concept (POC) steps—to start integrating AI. Through these steps, you learn the best ways to implement AI in your company. Only you know your daily work processes, so only you can determine how AI can best serve your needs. Consultants, regardless of their expertise or cost, can't dictate the right AI processes for you.  

When companies realize that implementing AI isn't as expensive or complicated as they thought, they are on the road to improvement. 

🌐 From the Web 

Researchers at Anthropic have identified features in AI models that can be adjusted to alter behavior, potentially improving control and addressing risks like bias and safety concerns. 

The EU's AI Act, effective next month, sets strict regulations on AI use, emphasizing transparency and accountability, and will have global implications for companies using EU customer data. 

Generative AI is highly energy-intensive and is helping drive data center electricity use toward a projected doubling by 2026. Efficiency improvements are critical as demand and environmental impact grow. 

🏳️ Ethical AI

AI Ethics in SEO 

AI enhances productivity for brands and marketers but is imperfect and can produce inaccurate or biased information. Ethical AI involves transparency, fairness, responsibility, respecting user privacy, and ensuring information accuracy. Some best practices: 

  • Disclose AI usage in content creation and SEO processes to clients and stakeholders. 

  • Use a human-led approach for long-form content creation, with AI assisting in brainstorming and organizing. 

  • Check AI-generated content for originality and trustworthiness, using tools like Copyscape. 

  • Google emphasizes people-first content; avoid relying solely on AI due to potential inadequacies in originality and training data. 

  • Follow ethical standards and search engine guidelines, avoiding practices like keyword stuffing or cloaking. 

  • Ensure AI usage in SEO promotes trustworthy and useful content that benefits users. 

🤖 Prompt of the week 

Act as a data scientist and code. I have a time series dataset of [describe dataset]. Please perform a time series decomposition and plot the components. 
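
For reference, here is a minimal sketch of the kind of code that prompt might produce, assuming a monthly time series in a pandas DataFrame; the file and column names (sales.csv, date, sales) are illustrative placeholders:

```python
# Minimal sketch of a time series decomposition; "sales.csv", "date", and
# "sales" are illustrative placeholders for your own dataset.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Load the series with a DatetimeIndex.
df = pd.read_csv("sales.csv", parse_dates=["date"], index_col="date")

# Additive decomposition into trend, seasonal, and residual components;
# period=12 assumes yearly seasonality in monthly observations.
result = seasonal_decompose(df["sales"], model="additive", period=12)

# Plot the observed series alongside its three components.
result.plot()
plt.show()
```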

See you next week, 

Mukundan 

The Data Talent Pulse is brought to you by TeamEpic, a trusted global AI Talent provider. Learn more about us here. 
