AI adoption risk factors, Top 3 traits in data scientists

DTP #6

80% of small business owners said they're concerned about privacy, ethical, and intellectual property issues associated with AI.

These are important and valid concerns.

This week we're taking a look at what established companies adopting AI (like Meta and Eventbrite) are saying about the associated risks in their quarterly reports.

Plus we highlight some insights from Ya Xu, Head of Data at LinkedIn, on the top three traits to look for when hiring data science talent.

Read below.

👩🏻‍💻 How are industries adopting AI?

  • How Zendesk is Reshaping Customer Service with Gen AI: Zendesk, a Software-as-a-Service company, is utilizing generative AI to enhance customer experiences. They have introduced generative AI for various use cases such as summarization and expanding replies. However, Zendesk remains cautious about adopting generative AI due to potential risks and is focused on establishing trust and addressing associated gaps.

  • Industrial AI: How Is Artificial Intelligence Transforming the Manufacturing Industry? Highlights from the article: GE offers an autonomous robotics system utilizing AI technologies to improve productivity in industrial settings. Intel uses AI to enable real-time data generation for fine-tuning workflows in the manufacturing industry. NVIDIA provides the IGX Orin platform, offering industrial inspection, predictive maintenance, and robotics solutions for edge AI applications.


What are businesses saying about AI risks?

Companies have been working out how to adopt AI intelligently. While embracing the technology, a few have begun acknowledging its ramifications in their recent 10-K and 10-Q filings (although these remain a minority among the companies listed in major indices). Some have included separate risk factors dedicated to AI in their reports, for example:

Meta (social media and technology company):

We may not be successful in our artificial intelligence initiatives, which could adversely affect our business, reputation, or financial results.

“We are making significant investments in artificial intelligence (AI) initiatives, including to recommend relevant unconnected content across our products, enhance our advertising tools, and develop new product features using generative AI. In particular, we expect our AI initiatives will require increased investment in infrastructure and headcount. AI technologies are complex and rapidly evolving, and we face significant competition from other companies as well as an evolving regulatory landscape. These efforts, including the introduction of new products or changes to existing products, may result in new or enhanced governmental or regulatory scrutiny, litigation, ethical concerns, or other complications that could adversely affect our business, reputation, or financial results. For example, the use of datasets to develop AI models, the content generated by AI systems, or the application of AI systems may be found to be insufficient, offensive, biased, or harmful, or violate current or future laws and regulations. In addition, market acceptance of AI technologies is uncertain, and we may be unsuccessful in our product development efforts. Any of these factors could adversely affect our business, reputation, or financial results.”

Eventbrite (event management and ticketing platform):

We are incorporating generative artificial intelligence, or AI, into some of our products. This technology is new and developing and may present operational and reputational risks.

“We have incorporated a number of third-party generative AI features into our products. This technology, which is a new and emerging technology that is in its early stages of commercial use, presents a number of risks inherent in its use. AI algorithms are based on machine learning and predictive analytics, which can create accuracy issues, unintended biases and discriminatory outcomes. We have implemented measures, such as in-product disclosures, which inform creators when content is created for them by generative AI and that they are responsible for the accuracy and editorial review of their content. There is a risk that third-party generative AI algorithms could produce inaccurate or misleading content or other discriminatory or unexpected results or behaviors (e.g., AI hallucinatory behavior that can generate irrelevant, nonsensical or factually incorrect results) that could harm our reputation, business or customers. In addition, the use of AI involves significant technical complexity and requires specialized expertise. Any disruption or failure in our AI systems or infrastructure could result in delays or errors in our operations, which could harm our business and financial results.”

Biogen (biotechnology company):

The increasing use of social media platforms and artificial intelligence based software presents new risks and challenges.

“Social media is increasingly being used to communicate about our products and the diseases our therapies are designed to treat. Social media practices in the biopharmaceutical industry continue to evolve and regulations relating to such use are not always clear and create uncertainty and risk of noncompliance with regulations applicable to our business. For example, patients may use social media channels to comment on the effectiveness of a product or to report an alleged adverse event. When such disclosures occur, there is a risk that we fail to monitor and comply with applicable adverse event reporting obligations or we may not be able to defend the company or the public's legitimate interests in the face of the political and market pressures generated by social media due to restrictions on what we may say about our products. There is also a risk of inappropriate disclosure of sensitive information or negative or inaccurate posts or comments about us on social media. We may also encounter criticism on social media regarding our company, management, product candidates or products.”

Forbes recently identified the top 5 risks of generative AI that business leaders should watch out for:

- Risk of disruption

- Cybersecurity risk

- Reputational risk

- Legal risk

- Operational risk

Other companies have folded AI-related risks into broader risk disclosures as one factor among several, for example:

  • There may be challenges in attracting and retaining employees with AI expertise and competing for talent using AI tools.

  • The use of AI-based software by employees, vendors, suppliers, contractors, consultants, or third parties may lead to the potential release of confidential or proprietary information.

  • There is a risk of potential failures in incorporating AI into business systems, such as bugs, vulnerabilities, or algorithmic flaws that may not be easily detectable.

  • Competition is increasing with the introduction of new technologies, including AI, which can render a company's products or services obsolete.

  • Insufficient or biased data, unintentional bias or discrimination through the use of AI, or unauthorized use of AI tools can lead to potential legal or reputational harm. Negative publicity or public perception of AI can also be a concern.

  • Uncertainties in case law and regulations regarding intellectual property ownership and license rights of AI output can create risks in adequately protecting intellectual property underlying AI systems and software. Inadvertent infringement is also a concern.

Bonus: top 3 traits for data science talent

During an interview with VentureBeat, Ya Xu, LinkedIn's VP of Engineering and Head of Data and Artificial Intelligence (AI), outlined the top three qualities to seek in data science professionals when hiring.

First, Xu emphasized the importance of finding individuals who are driven by a sense of purpose and motivated by making a real impact. She noted that while different approaches may be employed, the ultimate goal should be to benefit LinkedIn's members and customers.

Second, Xu highlighted the importance of hiring collaborative individuals: people who genuinely care for their colleagues and respect diverse skill sets. She advised against recruiting those with an attitude of superiority who think they are the smartest in the room and dismiss the opinions of others.

Lastly, Xu stressed the need for individuals who are willing to learn, adapt, and stay curious. She acknowledged that nobody can claim to know everything in the rapidly evolving field of data science. Even with her own Ph.D. in statistics, earned 10 years ago, Xu recognized how much the field has advanced and emphasized the importance of remaining open-minded and eager to learn.

See you next week.

Do you have a unique perspective on developing and managing data science and AI talent? We want to hear from you! Reach out to us by replying to this email.
