Table of Contents
- The Evolution of Threat Defense – An Interview with Steve Povolny of Exabeam
- How AI-Augmented Threat Intelligence Solves Security Shortfalls
- Google’s medical LLM proves to increase in accuracy
- AI is Here: How Should CISOs Respond?
- Cynomi and InfoSystems Cyber Partner to Promote Cyber Resilience Analysis – Benzinga
- Forbes Technology Council: Why Large Language Models (LLMs) Alone Won’t Save Cybersecurity
- MLOps Market to Surpass USD 17,335 Million Value by 2030, Says P&S Intelligence
- University of Guelph receives provincial funding for new AI program
- Modular Agents Boost AI Learning, Enhancing Decision-Making and Adaptability
- Robot Dogs Can Now Talk Because Scientists Have Put ChatGPT In Them
- US SEC developing rules on AI ‘conflicts of interest’
- How FedEx Dataworks is using analytics, AI to fortify supply chains
- Plurilock Announces New AI-Driven Cloud Access Security Broker Technology for Generative AI Tool…
- Best Practices for Implementing Machine Learning in Organizations
- Splunk unveils Splunk AI to ease security and observability through generative AI – USA News Hub
- The Power of AI: Artificial Intelligence and the M&A Process
- Factors.ai secures $3.6 million in pre-Series A funding led by Stellaris Venture Partners | The …
- Darktrace unveils AI-enabled capabilities for incident response
- 4 Methods to Increase Your Profits this Year with Analytics and AI – insideBIGDATA
- Avant Technologies, Inc. Appoints Danny Rittman as Chief Information Security Officer – Avant Te…
- FraudGPT AI Bot Authors Malicious Code, Phishing Emails, and More
The Evolution of Threat Defense – An Interview with Steve Povolny of Exabeam
cyberinsiders
The Role of Behavior Profiling and Machine Learning in Detecting Threats
As Steve pointed out, “vulnerabilities are the weaknesses that attackers exploit, while IoCs are the signs that a vulnerability has been exploited.” Understanding these concepts is vital to crafting a robust cybersecurity strategy.
Link: https://www.cybersecurity-insiders.com/the-evolution-of-threat-defense-an-interview-with-steve-povolny-of-exabeam/
How AI-Augmented Threat Intelligence Solves Security Shortfalls
Robert Lemos
This article discusses how large language model (LLM) systems can help security operations and threat intelligence teams become more efficient and effective, though a lack of experience with these systems is holding many companies back from adopting the technology.
Key takeaways:
Organizations should implement LLMs for solvable problems and evaluate the utility of LLMs in their environment.
To create a strong threat intelligence capability, organizations need data about relevant threats, the capability to process and standardize the data, and the ability to interpret how the data relates to security concerns.
AI can help bridge the gap between data and security concerns, but organizations should keep a human in the loop to ensure accuracy.
Counter arguments:
Relying on LLMs to produce coherent threat analysis can lead to potential “hallucinations” due to incorrect or missing data.
Organizations should use “prompt engineering” to ask questions in an optimized way to get the most accurate answers.
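The "prompt engineering" point can be made concrete: constrain the model to the supplied data and give it an explicit way to say it doesn't know, so gaps surface instead of hallucinations. A minimal sketch; the function, indicator value, and wording are illustrative, not from the article:

```python
def build_threat_intel_prompt(indicator: str, context: str) -> str:
    """Assemble a constrained prompt so the model answers only from the
    supplied report excerpt, reducing the risk of hallucinated analysis."""
    return (
        "You are a threat-intelligence assistant.\n"
        "Answer ONLY from the report excerpt below; if the excerpt does not "
        "contain the answer, reply exactly 'insufficient data'.\n\n"
        f"Report excerpt:\n{context}\n\n"
        f"Question: What threat activity is associated with {indicator}? "
        "Cite the sentence you relied on."
    )

# Hypothetical indicator and excerpt for illustration.
prompt = build_threat_intel_prompt(
    indicator="198.51.100.7",
    context="198.51.100.7 was observed serving a phishing kit on 2023-07-12.",
)
print(prompt)
```

Pairing a prompt like this with a human reviewer is one way to implement the "human in the loop" the article recommends.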
Link: https://www.darkreading.com/black-hat/ai-augmented-threat-intelligence-solves-security-shortfalls
Google’s medical LLM proves to increase in accuracy
admin
“Our hope is LLM systems such as Med-PaLM, that are designed for medical applications with safety as paramount, will democratize access to high-quality medical information, particularly in geographies with a limited number of medical professionals,” Vivek Natarajan, AI researcher at Google and one of the researchers in the study, said.
“And eventually, with further development, rigorous validation of safety and efficacy, we hope Med-PaLM will find broad uptake in direct care pathways-augmenting our clinicians, reducing their administrative burden, aid with clinical decision making, giving them more time to focus on patients and overall make healthcare more accessible, equitable, safer and humane.”
THE LARGER TREND
In March, the technology company’s Med-PaLM 2 was tested on U.S. Medical Licensing Examination-style questions, performing at an “expert” test-taker level with 85%+ accuracy.
Link: https://todayheadline.co/googles-medical-llm-proves-to-increase-in-accuracy/
AI is Here: How Should CISOs Respond?
As Artificial Intelligence (AI) becomes more commonplace, CISOs must develop strategies to effectively respond.
1. Adopt an AI-Centric Strategy: CISOs should deploy AI-driven security solutions to reinforce their existing security techniques, such as anomaly detection, reputation scoring, and machine learning analytics. By leveraging the automated nature of AI-driven solutions, CISOs can reduce manual effort and create an effective security framework.
2. Implement Multi-Factor Authentication: Multi-factor authentication (MFA) is a security measure that requires users to provide multiple credentials to verify their identity. This is especially effective with AI-driven security solutions, which can detect malicious behavior and prevent unauthorized access.
3. Train Employees on AI Security: Employees should be trained on the risks posed by AI-driven threats and on the latest security measures. CISOs should regularly review and update their training protocols to ensure they are up to date. Employee training should also include information about privacy laws such as the GDPR and secure coding practices.
4. Educate Users on Security Awareness: CISOs should provide security awareness information to users to help them spot potential threats. This can include emails, newsletters, webinars, training courses, and other resources.
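The anomaly detection mentioned in point 1 boils down to comparing new activity against a learned baseline. A toy sketch with a simple standard-deviation threshold; the metric name and numbers are invented, and real products use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observations, z_threshold=3.0):
    """Flag observations that deviate from the baseline by more than
    z_threshold standard deviations -- the core idea of baseline-driven
    anomaly detection."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) > z_threshold * sigma]

# Hypothetical daily failed-login counts: a quiet baseline, then new days.
baseline = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
today = [5, 6, 42]          # 42 failed logins is far outside the baseline
print(flag_anomalies(baseline, today))   # -> [42]
```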
Link: https://cloudsecurityalliance.org/blog/2023/07/17/ai-is-here-how-should-cisos-respond/
Cynomi and InfoSystems Cyber Partner to Promote Cyber Resilience Analysis – Benzinga
PRNewswire
The partnership will enable InfoSystems Cyber to offer its clients Cynomi’s AI-powered cybersecurity management platform in conjunction with their cybersecurity consulting methodology, providing a comprehensive analysis of, and roadmap to, becoming cybersecure.
Link: https://www.benzinga.com/pressreleases/23/07/n33252142/cynomi-and-infosystems-cyber-partner-to-promote-cyber-resilience-analysis
Forbes Technology Council: Why Large Language Models (LLMs) Alone Won’t Save Cybersecurity
Matt Shea
Summary:
This article discusses the potential of Large Language Models (LLMs) to help with cybersecurity, and the potential risks associated with them.
Key takeaways:
1. LLMs can be used to spoof emails and phone calls, and to discover new zero-day exploits.
2. LLMs can help lower the barrier to using some tools in cybersecurity.
3. Aligning an organization’s attack surface to its detection surface is key to adversary defense in today’s cloud era.
Counter arguments:
1. LLMs alone won’t save cybersecurity.
2. LLMs can be used for disinformation and offensive cyberattacks.
Link: https://mixmode.ai/blog/forbes-technology-council-why-large-language-models-llms-alone-wont-save-cybersecurity/
MLOps Market to Surpass USD 17,335 Million Value by 2030, Says P&S Intelligence
CISION (PR Newswire)
The global MLOps market is expected to surpass a value of USD 17,335 million by 2030, according to leading market research company P&S Intelligence.
The increasing adoption of cloud-native tools and automation-focused MLOps solutions is expected to provide a major boost to the market growth over the forecast period.
The surging need for scalability and robust automation solutions among enterprises to meet ever-changing business requirements is expected to remain the major force driving the market.
The market is further expected to benefit from the rising demand for efficient data pre-processing techniques and highly secure software systems across various industry verticals.
Additionally, the market is also expected to experience robust growth from the presence of several software vendors providing comprehensive services for system architecture, DevOps and microservices development.
The MLOps market is divided into three main segments – Industry Verticals, End-Users, and Region.
The industry verticals segment includes automotive, banking and finance, telecom and IT, public services/utilities, healthcare, and others.
On the other hand, the end-users space comprises solutions catering to industrial & manufacturing, retail & e-commerce, software & technology, media & entertainment, government & education, and others.
The demand for MLOps solutions is expected to
Link: https://roboticulized.com/artificial-intelligence/2023/07/17/133725/mlops-market-to-surpass-usd-17335-million-value-by-2030-says-ps-intelligence/
University of Guelph receives provincial funding for new AI program
country104
University of Guelph researchers and their partners are using artificial intelligence (AI) to help farmers make the most of big data in order to improve agricultural production.
The program, called Precision Agri-Food Systems, is funded by the province.
It is co-led by the Ontario Agricultural College, the Ridgetown Campus of the University of Guelph, and the University of Ottawa.
The program involves a number of research projects, jointly managed by the province and its research consortium partners, which use AI and computer vision to help track and monitor how crops are growing, how diseases and pests are affecting plants, and how to produce food more efficiently.
The program will also explore the use of blockchain technology for ensuring accurate and transparent data collection that can be used to inform decision-making in the sector.
Researchers will also use predictive analytics to provide real-time crop condition reports to farmers, relevant agronomic warnings, and related suggestions on managing their crop production in an automated manner.
In addition, the program will explore how AI-related solutions can be used to capture, store, analyze, and wirelessly transmit information about soil, air quality, and food safety to enable accurate decision-making by farmers and government policy
Link: https://country104.com/news/9836642/university-guelph-ai-program/
Modular Agents Boost AI Learning, Enhancing Decision-Making and Adaptability
This news article discusses the potential for modular agents to provide a boost to Artificial Intelligence (AI) learning, and to aid in decision making and adaptability.
Modular agents are small, customizable AI modules that can be incorporated into larger AI architectures, providing the AI with new features and capabilities.
These agents can be used in combination with other AI modules to create more complex systems that are able to offer significant advantages in terms of performance, speed and scalability.
The article discusses how modular agents could improve AI’s ability to learn from experience, as well as enhance decision making and adaptability.
It also notes that modular agents could be used to develop AI systems that are able to carry out tasks more quickly and effectively, and at a lower cost.
Link: https://cryptoprice.ng/en/crypto-news/370739/modular-agents-boost-ai-learning-enhancing-decision-making-adaptability
Robot Dogs Can Now Talk Because Scientists Have Put ChatGPT In Them
Trisha Leigh
Robot Dogs Can Now Talk Because Scientists Have Put ChatGPT In Them
As if robot dogs weren’t borderline disturbing enough when set loose in the wild, now someone has thought it was a good idea to supply one of the Boston Dynamics prototypes with ChatGPT. Machine learning engineer Santiago Valdarrama posted the video to Twitter, and in it, you can watch “Spot” the robot dog verbally answer system questions. “Put that together with a voice-enabled interface, and we have an awesome way to query our data.”
The “dog” shakes its head to say no and bows to say yes, and even though the video should feel harmless, most people don’t see it that way.
That said, integrating robots and chatbots has always been the plan, according to Microsoft and OpenAI: “We believe that our work is just the start of a shift in how we develop robotics systems, and we hope to inspire other researchers to jump into this exciting field.”
Get used to the slightly uncomfortable feeling these videos give you, friends.
Link: https://twistedsifter.com/2023/07/robot-dogs-can-now-talk-because-scientists-have-put-chatgpt-in-them/
US SEC developing rules on AI ‘conflicts of interest’
The US Securities and Exchange Commission (SEC) is reportedly developing rules to address potential conflicts of interest arising from the use of artificial intelligence (AI) in investment decision-making.
According to Reuters, the proposed rules will aim to ensure that AI-based strategies applied to portfolio management or automated trading disclose any related potential conflicts of interest.
The proposed rules will require private technology companies and others using AI for portfolio management to inform customers of any potential conflicts of interest stemming from the use of AI and other automated systems.
In particular, firms will be expected to provide full disclosure of methods and techniques used, as well as any underlying algorithms that are used to direct investment decisions.
The SEC is also reportedly considering creating a disclosure template for firms using AI systems.
This template will contain specific information, such as the input data used by the system, where it was sourced, who developed it, and any AI algorithms used.
The SEC’s proposed rules come at a time when the use of AI in financial markets is rapidly increasing.
AI-based portfolios are becoming increasingly popular among private and institutional investors, raising concerns about potential conflicts of interest that may arise in portfolio management.
By introducing new regulations, the SEC aims to ensure that AI-based strategies are transparent
Link: https://srnnews.com/us-sec-developing-rules-on-ai-conflicts-of-interest-2/
How FedEx Dataworks is using analytics, AI to fortify supply chains
News Feed Editor
FedEx DataWorks, a subsidiary of international courier FedEx, is using advanced analytics and artificial intelligence (AI) to strengthen supply chains across the globe.
The company has taken an innovative approach to supply chain management, allowing businesses to increase efficiency and unlock hidden potential within their supply chains.
FedEx DataWorks’ technology suite uses a combination of predictive analytics, AI, machine learning, and data science to analyze and manage the increasing complexity of supply chains.
This data is used to create insights and develop intelligence that can be used to inform decisions for streamlining operations, reducing complexities, and improving the flow of goods.
For instance, FedEx DataWorks can identify areas of the supply chain where inefficiencies exist and devise solutions to those problems, such as introducing better logistics procedures or recommending changes to warehouse inventory management strategies.
It also has the capability to recognize changing market patterns, detect fraudulent activities, and monitor customer satisfaction levels.
The technology used by FedEx DataWorks is also employed to gain an in-depth understanding of the supply chain and to improve upon it.
This includes trying to lower inventory costs, reduce transit times, and ensure accuracy of orders.
Additionally, DataWorks can help automate and monitor shipments in real-time, giving businesses greater visibility and
Link: https://www.rocketnews.com/2023/07/how-fedex-dataworks-is-using-analytics-ai-to-fortify-supply-chains/
Plurilock Announces New AI-Driven Cloud Access Security Broker Technology for Generative AI Tool…
Plurilock announced a new AI-driven cloud access security broker (CASB) technology for generative AI tools and has submitted its US provisional patent application.
This technology gives customers the ability to secure access to resources stored within cloud services, including those stored on the public cloud and on-site services.
It also enables organizations to extend their authentication and authorization strategies into the cloud to provide the highest level of security for their data.
With features such as granular access control, user and device profiling, and pre-authentication hashing, customers can create a more secure cloud environment.
Additionally, Plurilock’s CASB technology is designed to be compatible with existing authentication and authorization protocols, allowing organizations to leverage their existing security infrastructure while also adopting the latest in access security technology.
Plurilock is confident that its CASB technology will provide customers with the ability to protect their data in the cloud, while also providing a more secure and seamless user experience.
Link: https://www.newsfilecorp.com/release/173816/Plurilock-Announces-New-AIDriven-Cloud-Access-Security-Broker-Technology-for-Generative-AI-Tools-and-Submits-U.S.-Provisional-Patent-Application
Best Practices for Implementing Machine Learning in Organizations
Alfred
1. Create a Clear Machine Learning Vision: To ensure successful implementation of machine learning in an organization, it’s important to have a clear vision outlining the desired objectives and how machine learning fits into the business strategy. Without a clear vision, it’s impossible to set the right expectations and effectively measure success.
2. Establish a Dedicated AI/ML Team: Machine learning projects require a wide range of highly specialized skills, including data scientists, software engineers, and product managers. To be successful, it’s essential to build a cross-functional team consisting of the right personnel who can effectively build, train, and deploy machine learning models.
3. Assess Data Requirements: Data is the foundation of any machine learning project, meaning that it’s critical to assess data requirements early on. This includes understanding the data sources available, the format and quality of data assets, and partnering with business stakeholders to identify the right datasets that can be leveraged for the desired machine learning objectives.
4. Develop a Systematic Process: Building a successful machine learning model requires a lot of trial and error. To maximize efficiency, it’s best to develop a systematic process for building and validating models. This process should
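One concrete piece of the systematic process described in point 4 is evaluating every model variant the same way, for example with k-fold cross-validation. A generic stdlib-only sketch, not taken from the article:

```python
def k_fold_indices(n_samples, k=5):
    """Return (train, validation) index lists for k-fold cross-validation,
    a common backbone of a repeatable build-and-validate loop: each sample
    appears in exactly one validation fold."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    folds = []
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples
        val = indices[start:end]
        train = indices[:start] + indices[end:]
        folds.append((train, val))
    return folds

for train, val in k_fold_indices(10, k=5):
    print(len(train), len(val))   # every fold: 8 train, 2 validation
```

In practice a team would train a candidate model on each `train` split and average its score across the `val` splits, so every variant is judged on the same footing.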
Link: https://publicistpaper.com/bеst-practicеs-for-implеmеnting-machinе-lеarning-in-organization/
Splunk unveils Splunk AI to ease security and observability through generative AI – USA News Hub
allusanewshub.com
Splunk Inc. today announced the introduction of Splunk AI, a patented artificial intelligence and machine learning platform designed to make it easier for businesses to manage and observe their operations with greater accuracy.
The new platform builds on Splunk’s existing observability solution, leveraging generative AI to detect threats, recognize patterns, and quickly alert administrators to potential problems.
Splunk AI is being released on Splunk Cloud, Splunk Enterprise, and Splunk Insights.
Splunk AI’s generative AI technology can detect and recognize extraordinary patterns outside the normal scope of rule-based detection.
By leveraging algorithms tuned to identify unusual behavior in large data sets, Splunk AI can help businesses become more resilient.
It also enables a faster, more proactive response to potential security threats, enabling enterprises to keep their data and infrastructure more secure.
In addition to providing security-focused solutions, Splunk AI also can help organizations create insights and operationalize their data faster.
The platform offers features such as “hotel lobbies,” which leverage ML and clustering to create encapsulated models, streams, processes, and metrics of activity.
By quickly and accurately identifying anomalies, Splunk AI can help organizations more easily improve their operations to maintain business continuity.
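The "algorithms tuned to identify unusual behavior in large data sets" describe a broad family of techniques. A toy streaming version using an exponentially weighted moving average; the class, metric, and values are invented for illustration and are not Splunk's actual method:

```python
class EwmaDetector:
    """Streaming anomaly detector: track an exponentially weighted moving
    average of a metric and flag points far from the running estimate."""

    def __init__(self, alpha=0.3, tolerance=20.0):
        self.alpha = alpha          # weight given to the newest sample
        self.tolerance = tolerance  # max allowed deviation from the average
        self.estimate = None

    def update(self, value):
        """Return True if value is anomalous, then fold it into the average."""
        if self.estimate is None:
            self.estimate = value
            return False
        anomalous = abs(value - self.estimate) > self.tolerance
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * value
        return anomalous

det = EwmaDetector(alpha=0.3, tolerance=20.0)
stream = [50, 52, 51, 49, 95, 50]   # hypothetical requests-per-second metric
print([v for v in stream if det.update(v)])   # -> [95]
```

Because the detector keeps only a single running estimate, it scales to large data streams, which is why this family of methods suits observability workloads.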
Link: http://www.allusanewshub.com/2023/07/18/splunk-unveils-splunk-ai-to-ease-security-and-observability-through-generative-ai/
The Power of AI: Artificial Intelligence and the M&A Process
David Marshall
Consider these potential tools and insights:
Code review and quality assessment: AI tools can be used to analyze the software’s source code, detecting potential bugs, security vulnerabilities, and areas where the code does not follow best practices.
Value evaluation: AI can help appraise the software’s monetary value by considering all these factors, along with others like market trends, competitor software, and potential future uses of the software.
AI and the M&A Outcome
Throughout post-M&A integration, AI, with its superior ability to analyze large volumes of data, can identify unseen patterns, trends, and insights that can lead to value-creation opportunities, including identifying efficiencies, synergies, and potential areas for growth or innovation.
Link: https://vmblog.com:443/archive/2023/07/18/the-power-of-ai-artificial-intelligence-and-the-m-a-process.aspx
Factors.ai secures $3.6 million in pre-Series A funding led by Stellaris Venture Partners | The …
MSME Desk
With the funds, Factors.ai plans to expand its go-to-market teams, including employees in sales, marketing, and customer success, and also invest more in its products and engineering divisions.
Srikrishna Swaminathan, Co-founder and CEO of Factors.ai, said, “This funding reaffirms our mission to revolutionise B2B go-to-market strategies and empower businesses to grow exceptionally.”
Alok Goyal, Partner, Stellaris Venture Partners, added, “B2B marketing is undergoing rapid evolution, growing increasingly complex, and existing analytics and attribution solutions are ill-equipped to deal with this change.”
Link: https://www.financialexpress.com/industry/sme/msme-factors-ai-secures-3-6-million-in-pre-series-a-funding-led-by-stellaris-venture-partners/3176588/
Darktrace unveils AI-enabled capabilities for incident response
Gaurav Sharma
Security Brief AU
Darktrace has announced the launch of Darktrace Heal, its AI-enabled product to help businesses more effectively prepare for, rapidly remediate, and recover from cyber-attacks.
Heal provides security teams with unique abilities to simulate actual attacks within their environments, create bespoke incident response plans as cyber incidents unfold, and automate actions to respond to and recover from those incidents rapidly.
With Heal, security teams can simulate real-world cyber incidents, allowing them to prepare for and practice responding to complex attacks in their own environments.
They can create bespoke, AI-generated playbooks as an attack unfolds based on the details of their environment, the attack, and lessons learned from their previous simulations.
This reduces information overload, prioritises actions, and enables faster decision-making at critical moments.
The security teams can also automate actions from the response plan to rapidly stop and recover from the attack within the Heal interface.
They can create a full incident report, including an audit trail of the incident response with details of the attack, actions Heal suggested, and actions taken by the security team for future learning and to support compliance efforts.
Heal’s simulated incidents are a first-of-its-kind capability for security teams to safely run live simulations of real-world cyber-attacks ranging from data theft and ransomware encryption to rapid worm propagation, all in their environments and involving their assets.
Security teams are expected to flawlessly manage incident response in the face of a live, rapidly unfolding, often novel attack, usually without any realistic practice.
Heal works with Darktrace Detect and Darktrace Prevent to build a live picture of the environment and the attack, and integrates with Darktrace Respond to prioritise, isolate, and heal key assets to cut off and shorten attacks.
Its introduction closes Darktrace’s Cyber AI Loop, bringing together Detect, Prevent, Respond, and Heal into a single platform where each element draws insights from and continuously reinforces the others to create a best-in-class cyber defence.
Link: https://securitybrief.com.au/story/darktrace-unveils-ai-enabled-capabilities-for-incident-response
4 Methods to Increase Your Profits this Year with Analytics and AI – insideBIGDATA
@insidebigdata
1. Optimize pricing. Using analytics and AI, companies can gain insights into what customers are willing to pay, which can help them identify where to adjust their prices to maximize profits. This gets even better when combined with predictive analytics, which can help companies anticipate customer price sensitivities and adjust prices in order to increase profits and maximize revenue.
2. Automate marketing. By leveraging analytics and AI for marketing, companies can get a better understanding of who their customers are and what they need, allowing them to deliver personalized content, ads, and other communications more effectively and efficiently. This can result in more conversions and greater ROI.
3. Enhance customer experience. Using analytics and AI, companies can gain insights into customer behavior, preferences, and other factors that can be used to improve their products and services. This can help ensure customers have a consistently positive experience and make them more likely to remain loyal and keep recommending your business to others.
4. Boost sales. Analytics and AI can be used to gain insights into customer buying behavior, so companies can identify what products and services customers are looking for and better target and pitch their offerings. This can help boost sales and increase profits.
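The pricing idea in point 1 reduces to maximizing price times predicted demand. A minimal sketch with a made-up linear demand curve; in practice the demand function would come from a model fit to historical sales data:

```python
def best_price(prices, demand_at):
    """Pick the candidate price that maximizes expected revenue,
    where revenue = price * predicted units sold at that price."""
    return max(prices, key=lambda p: p * demand_at(p))

# Toy linear demand curve: the higher the price, the fewer units sold.
def demand(p):
    return max(0.0, 1000 - 40 * p)

candidates = [8, 10, 12, 14, 16]
print(best_price(candidates, demand))   # -> 12
```

With this hypothetical curve, revenue at $12 (12 × 520 = 6240) beats both cheaper and pricier candidates, which is the kind of trade-off a price-optimization model surfaces.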
Link: https://insidebigdata.com/white-paper/4-methods-to-increase-your-profits-this-year-with-analytics-and-ai-2/
Avant Technologies, Inc. Appoints Danny Rittman as Chief Information Security Officer – Avant Te…
Globe Newswire
LAS VEGAS, NV, via NewMediaWire – Avant Technologies, Inc. (AVAI) (“Avant” or the “Company”), an artificial intelligence technology company specializing in acquiring, creating, and developing innovative and advanced technologies utilizing artificial intelligence (AI), today announced the appointment of Dr. Danny Rittman as Chief Information Security Officer (CISO).
Avant’s Chief Operating Officer, Paul Averill, said of Dr. Rittman’s appointment, “Danny is a deeply experienced technology leader with a proven track record of driving innovation and developing solutions with cybersecurity frameworks.”
Dr. Rittman has led AI-based cybersecurity technology development for IoT, Big Data, Computer Vision and Networks, and led the cybersecurity team in designing and implementing ML-based robust security measures to protect the company’s networks, systems, and data.
Link: https://www.benzinga.com/pressreleases/23/07/g33367848/avant-technologies-inc-appoints-danny-rittman-as-chief-information-security-officer
FraudGPT AI Bot Authors Malicious Code, Phishing Emails, and More
CamS@secureworld.io (Cam Sivesind)
Some of the features of FraudGPT include its ability to:
• Write malicious code
• Create undetectable malware
• Find non-VBV bins
• Create phishing pages
• Create hacking tools
• Find groups, sites, markets
• Write scam pages/letters
• Find leaks, vulnerabilities
• Learn to code/hack
• Find cardable sites
• Escrow available 24/7
• 3,000+ confirmed sales/reviews
Here are some comments from cybersecurity vendor experts:
Pyry Åvist, Co-founder and CTO at Hoxhunt:
“While ChatGPT works even for cybercriminals just smart enough to rub two brain cells together, the new FraudGPT offers added convenience, no ethical guardrails, and provides hand-holding throughout the phishing campaign creation process.”
[RELATED: Research Examines WormGPT, an AI Cybercrime Tool Used in BEC Attacks]
Timothy Morris, Chief Security Advisor at Tanium:
“This could be another exit scam.”
Åvist had this to add on security awareness training and the efficacy of human social engineers versus AI tools:
“In this study we performed, a phishing prompt was created and our human social engineers and ChatGPT had one afternoon to craft a phishing email based on that prompt.”
Link: https://www.secureworld.io/industry-news/fraudgpt-malicious-ai-bot