Generative AI Use Is On The Rise and So Are Concerns About Bias, Privacy and More
October 25, 2023

Rob Mason
Applause

The popularity of generative AI technology has skyrocketed in 2023, and that trend is likely to continue. Fortune Business Insights forecasts the global generative AI market to grow from USD 43.87 billion in 2023 to USD 667.96 billion by 2030. To gain insights into user experiences with generative AI services, my organization, Applause, surveyed more than 3,000 digital quality testing professionals across the globe. Here's what our survey uncovered.

Generative AI Use Is Growing

The majority of digital quality testing professionals are using generative AI; specifically:

■ 59% said their workplace allows for generative AI use.

■ 79% said they are actively using a generative AI service.

Reasons cited for using generative AI include writing software, doing research, and more creative pursuits like writing song lyrics and creating electronic music.

Concerns About Bias Are Growing, Too

Despite the increased usage of generative AI, concerns around the technology are growing. When we asked testers about bias, their worries became evident:

■ 90% of respondents expressed concern over bias affecting the accuracy, relevance, or tone of AI-generated content, a 24% increase from Applause's AI and Voice Applications Survey conducted in March of this year.

■ 47% shared they've experienced responses or content they considered to be biased.

■ 18% said they had received offensive responses.

Concerns About AI Hallucinations

AI hallucinations are another area of concern, with our survey finding:

■ 37% of respondents have seen examples of hallucinations in AI responses.

■ 88% of respondents expressed concern about using the technology for software development because of the potential for hallucinations.

Data Quality Concerns

The survey queried digital quality testing professionals about the quality of data being used to train AI algorithms:

■ 91% expressed some level of concern with data quality.

■ 22% expressed extreme concern.

Concerns Around Data Privacy and Copyright Infringement

We also asked if data privacy and copyright infringement were worries related to generative AI. Our survey found:

■ 98% of respondents said data privacy needs to be considered when developing new technologies.

■ 67% said they feel that most generative AI services infringe on data privacy.

When asked about their level of concern that content produced using generative AI may breach copyright or intellectual property protections, respondents shared:

■ 21% are extremely concerned.

■ 41% are concerned.

■ 29% are slightly concerned.

■ 9% are not concerned.

Testing With Real People

As the excitement, adoption, and use cases around generative AI continue to advance, there is no denying the potential and possibilities of this technology. However, some concerns still need to be addressed and overcome, including bias, hallucinations, and harmful content. Using teams of real people to test the technology and gaining their insights into the quality of the content produced — in terms of correctness, subtleties of bias, and relevance — will help improve AI performance over time. Meanwhile, we must also ensure the data these algorithms are being trained on is high quality and does not violate privacy or copyright laws.

Rob Mason is CTO of Applause.
