PwC highlights 11 ChatGPT and generative AI security trends to watch in 2023

Are ChatGPT and generative AI a blessing or a curse for security teams? While artificial intelligence (AI)’s ability to create malicious code and phishing emails presents new challenges for organizations, it has also provided new tools for everything from threat detection and remediation guidance to securing Kubernetes and cloud environments — and has opened the door to many defensive use cases.

Recently, VentureBeat caught up with some of PwC’s top analysts, who shared their thoughts on how generative AI and tools like ChatGPT will impact the threat landscape and what use cases will emerge for defenders.

Overall, analysts were optimistic that defensive use cases would grow to counter the malicious use of AI over the long term. Their predictions about how generative AI will impact cybersecurity in the future include:

Malicious AI use
Need to protect AI training and output
Setting generative AI usage policies
Modernizing security auditing
Greater focus on data hygiene and assessing bias
Keeping up with expanding risks and mastering the basics
Creating new jobs and responsibilities
Leveraging AI to optimize cyber investments
Enhancing threat intelligence
Risk prevention and compliance risk management
Implementing a digital trust strategy
Below is an edited transcript of their responses.

1. Malicious AI use
“We’re at an inflection point when it comes to how we can leverage AI, and this paradigm shift affects everyone and everything. Big things can happen when AI is in the hands of citizens and consumers.

“At the same time, it can be used by malicious threat actors for nefarious purposes, such as creating malware and sophisticated phishing emails.

“Given the many unknowns about AI’s future capabilities, it is critical that organizations develop robust processes to build resilience against cyberattacks.

“There is also a need for regulation, grounded in societal values, that mandates the ethical use of this technology. In the meantime, we need to be smart users of this tool, considering how to minimize risks and what safeguards are needed for AI to deliver maximum value.”

Sean Joyce, Global Cyber Security and Privacy Leader, US Cyber, Risk and Regulatory Leader, PwC US

2. Need to protect AI training and output
“Now that generative AI has reached a point where it can help companies transform their businesses, it’s important for leaders to work with firms that deeply understand how to navigate increased security and privacy concerns.

“The reason is twofold. First, companies must protect how they train the AI, since the advantage they gain from fine-tuning models lies in how they operate their businesses, deliver better products and services, and engage with their employees, customers and ecosystem.

“Second, companies must also protect the prompts and responses they exchange with generative AI solutions, as these reflect what the company’s users and employees are doing with the technology.”

Mohamed Kande, Vice Chair – US Consulting Solutions Co-Leader and Global Advisory Leader, PwC US

3. Setting generative AI usage policies
“Many interesting business use cases emerge when you consider how you can further train generative AI models with your own content, documents and assets, so that they can build on your unique capabilities, relevant to your business and in your context. In this way, a business can extend generative AI with its own unique IP and knowledge-based ways of working.

“This is where security and privacy become important. For a business, the ways you instruct generative AI to produce content should remain private to your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content.

“However, not all users understand this yet. Therefore, it is important for any business to establish policies for the use of generative AI that protect confidential and private data from entering public systems, and to create a safe and secure environment for generative AI within the business.”

Brett Greenstein, Partner, Data, Analytics & AI, PwC U.S.
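One way such a usage policy might be enforced in practice is a pre-submission gate that screens prompts before they leave the company for a public AI system. The following is a minimal, illustrative sketch only — the patterns, names, and policy categories are hypothetical, not drawn from PwC guidance:

```python
import re

# Hypothetical policy patterns; a real deployment would use the
# organization's own data-classification rules, not this short list.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\bconfidential\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt bound for a public AI system.

    The prompt is allowed only if none of the blocked patterns match.
    """
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)
```

A filter like this is only one layer of such a policy; pattern matching cannot catch every leak, so it would typically sit alongside user training and the platform-level privacy controls mentioned above.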

4. Modernizing security auditing
“There are amazing possibilities for using generative AI to drive innovation in audit! Sophisticated generative AI has the ability to generate responses that take specific situations into account while being written in simple, easy-to-understand language.

“What this technology offers is a single point of access to information and guidance while also supporting document automation and analyzing data to answer specific questions – and it’s efficient. It’s a win-win.

“It’s not hard to see how a capability like this can provide a significantly better experience for our people. Plus, a better experience for our people translates to a better experience for our customers.”

Kathryn Kaminsky, Vice Chair – Co-Leader of US Trust Solutions

5. Greater focus on data hygiene and assessing bias
“Any data input into an AI system is vulnerable to potential theft or misuse. To begin with, identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information in an attack.

“Also, it is important to develop detailed and targeted prompts, so that the appropriate data is fed into the system and you get more valuable results.

“Once you have your outputs, review them with a fine-tooth comb for any inherent biases within the system. For this process, engage a diverse team of professionals to help you assess any biases.

“Unlike coded or scripted solutions, generative AI is based on models that are trained, and therefore the answers they provide are not 100% predictable. Getting the most reliable output from generative AI requires collaboration between the technology behind the scenes and the people leveraging it.”

Jackie Wagner, Principal, Cyber Security, Risk & Regulatory, PwC U.S.

6. Keeping up with expanding risks and mastering the basics
“Now that generative AI is reaching wide-scale adoption, implementing robust security measures is a must to protect against threat actors. The capabilities of this technology make it possible for cybercriminals to create deepfakes and execute malware and ransomware attacks more easily, and companies need to prepare for these challenges.

“The most effective cybermeasures continue to receive the least focus: By keeping up with basic cyberhygiene and condensing sprawling legacy systems, companies can reduce the attack surface for cybercriminals.

“Consolidating operating environments can reduce costs, allowing companies to maximize efficiencies and focus on improving their cybersecurity measures.”

Joe Nocera, PwC partner leader, cyber, risk and regulatory marketing

7. Creating new jobs and responsibilities
“Overall, I’d suggest companies consider embracing generative AI instead of creating firewalls and resisting — but with the appropriate safeguards and risk mitigations in place. Generative AI has some really interesting potential for how work gets done; it can actually help to free up time for human analysis and creativity.

“The emergence of generative AI could potentially lead to new jobs and responsibilities related to the technology itself — and creates a responsibility for making sure AI is being used ethically and responsibly.

“It also will require employees who utilize this information to develop a new skill — being able to assess and identify whether the content created is accurate.

“Much like how a calculator is used for doing simple math-related tasks, there are still many human skills that will need to be applied in the day-to-day use of generative AI, such as critical thinking and customization for purpose — in order to unlock the full power of generative AI.

“So, while on the surface it may seem to pose a threat in its ability to automate manual tasks, it can also unlock creativity and provide assistance, upskilling and training opportunities to help people excel in their jobs.”

Julia Lamm, workforce strategy partner, PwC U.S.

8. Leveraging AI to optimize cyber investments
“Even amidst economic uncertainty, companies aren’t actively looking to reduce cybersecurity spend in 2023; however, CISOs must be economical with their investment decisions.

“They are facing pressure to do more with less, leading them to invest in technology that replaces overly manual risk prevention and mitigation processes with automated alternatives.

“While generative AI is not perfect, it is very fast, productive and consistent, with rapidly improving skills. By implementing the right risk technology — such as machine learning mechanisms designed for greater risk coverage and detection — organizations can save money, time and headcount, and are better able to navigate and withstand any uncertainty that lies ahead.”

Elizabeth McNichol, enterprise technology solutions leader, cyber, risk and regulatory, PwC U.S.

9. Enhancing threat intelligence
“While companies releasing generative AI capabilities focus on safeguards to prevent the creation and distribution of malware, misinformation or disinformation, we need to assume that bad actors will use generative AI for these purposes, and to stay ahead of these considerations.

“In 2023, we fully expect to see further growth in threat intelligence and other defense capabilities to leverage generative AI for good. For example, making real-time inferences about access to systems and information with greater confidence than currently deployed access and identity models.

“It is certain that generative AI will have a far-reaching impact on the way every industry, and every company within it, operates. PwC believes that these collective advances will continue, powered by human leadership and technology, with 2023 seeing the most rapid developments, setting the direction for decades to come.”

Matt Hobbs, Microsoft Practice Leader, PwC U.S.

10. Risk prevention and compliance risk management
“As the threat landscape continues to evolve, the healthcare sector – an industry rich in personal information – continues to find itself in the crosshairs of threat actors.

“Healthcare industry executives are increasing their cyber budgets and investing in automation technologies that can not only help prevent cyberattacks, but also manage compliance risks, better protect patient and staff data, reduce healthcare costs, eliminate process failures and more.

“As generative AI continues to evolve, so do the associated risks and opportunities to secure healthcare systems, highlighting the importance of the healthcare industry embracing this new technology while also enhancing its cyber defenses and resilience.”

Tiffany Gallagher, Health Industries Risk and Regulatory Leader, PwC U.S.

11. Implementing a digital trust strategy
“The pace of technological innovation, such as generative AI, combined with an evolving patchwork of regulation and erosion of trust in institutions requires a more strategic approach.

“By implementing a digital trust strategy, organizations can better align traditionally siloed functions such as cybersecurity, privacy and data governance, enabling them to anticipate risks while also unlocking value for the business.

“At its core, a digital trust framework addresses solutions above and beyond compliance – instead prioritizing the exchange of trust and value between organizations and consumers.”

Toby Sperry, Principal, Data Risk & Privacy, PwC U.S.
