Table of Contents
- DataStax teams up with Microsoft to simplify AI development
- AI and data infrastructure drives demand for open source startups
- FrugalGPT and Reducing LLM Operating Costs
- Enriching ERP and Large Enterprises with Generative AI: Step 1 of the Framework
- MLOps and DevOps: Collaborating for Vector Database Excellence in Machine Learning Projects
- Ten Learnings from the 2024 Gartner Data & Analytics Summit
- Pure Storage and NVIDIA collaborate on RAG to customize Generative AI and to help eliminate hall…
- Vector Database Market Size, Share, Challenges and Growth Analysis Report 2033
- Bridgeline Releases Zeus Update with Concept and Image Search
- FLock.io raises $6M for decentralized blockchain AI training platform – SiliconANGLE
- Top Generative AI Predictions for 2024: A Boardroom Perspective
- What role does CXL play in AI? Depends on who you ask
- MyScaleDB Open-sourced: the SQL Vector Database with High Performance – Banking Industry Today -…
- More Than Half of Businesses Plan to Adopt AI Within the Next Two Years
- AI: A Double-Edged Sword For CISOs
- Check Point Announces a New Collaboration with Microsoft to Supercharge Infinity AI Copilot with…
- SADA Achieves Over 300% Increase in Generative AI and Machine Learning Projects in 2023
- Sands Lab Partners with Microsoft Korea to Develop Next-Generation AI Cybersecurity Technology
- RingCentral reimagines the business phone system with real-time AI
- Marriott Bonvoy embraces AI to better match travellers with their dream vacays
- Microsoft Bing Head to Step Down Amid AI Push
- Hewlett Packard Enterprise Leverages GenAI to Enhance AIOps Capabilities of HPE Aruba Networking…
- AI spending in EMEA will increase by two-thirds in 2024
- AWS, Accenture and Anthropic Join Hands To Boost AI Adoption Among Enterprises
DataStax teams up with Microsoft to simplify AI development
Sean Mitchell
Channel Life – Australia
DataStax announced a significant collaboration with Microsoft Semantic Kernel to simplify the development of enterprise-level retrieval-augmented generation (RAG) applications and AI agents
The collaboration integrates DataStax’s Astra DB vector database with Microsoft’s Semantic Kernel open-source SDK
This integration makes it easier for developers to bring their existing applications and data into the AI ecosystem by building RAG applications and AI agents
Semantic Kernel enables developers to build agents capable of invoking existing code, managing contextual conversations, and integrating with AI models like OpenAI and Azure OpenAI
Astra DB provides a vector database that delivers structured and vector data with high relevance, low latency, and global scale to AI applications built with Semantic Kernel
It allows developers skilled in C#, Python, or full-stack development to build powerful RAG applications and AI agents using their enterprise data
The collaboration aims to meet the increasing demand for RAG applications and advanced AI agents while streamlining the development process
Independent stakeholders like healthcare company Skypoint appreciate the integration for enhancing insight generation and user experiences through private AI copilots.
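The agent pattern described above (existing code exposed as callable plugins, plus contextual conversation state) can be sketched in a few lines. This is a framework-free illustration; the class and method names are hypothetical, not the actual Semantic Kernel API.

```python
# Minimal sketch of the agent pattern: an "agent" keeps a registry of
# existing functions (plugins) and a conversation log, and routes each
# request to the matching function. All names here are illustrative;
# the real Semantic Kernel SDK provides this machinery.
class MiniAgent:
    def __init__(self):
        self.plugins = {}   # name -> callable (existing application code)
        self.history = []   # contextual conversation log

    def register(self, name, fn):
        self.plugins[name] = fn

    def ask(self, request, plugin, **kwargs):
        self.history.append(("user", request))
        result = self.plugins[plugin](**kwargs)  # invoke existing code
        self.history.append(("agent", result))
        return result

agent = MiniAgent()
agent.register("order_status", lambda order_id: f"Order {order_id}: shipped")
print(agent.ask("Where is my order?", "order_status", order_id=42))  # → Order 42: shipped
```

In the real SDK, an LLM decides which registered function to invoke; here the routing is explicit to keep the sketch self-contained.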
Link: https://channellife.com.au/story/datastax-teams-up-with-microsoft-to-simplify-ai-development
AI and data infrastructure drives demand for open source startups
notisia365.cw
Notisia 365
The Runa Open Source Startup (ROSS) Index highlights the fastest-growing commercial open source software startups each year
In 2023, the top spot went to LangChain, an open source framework for building apps with large language models, reflecting the high demand for AI tools
Over half (26 out of 50) of the top trending open source startups in 2023 were related to AI and data infrastructure, capitalizing on the generative AI boom
Other top startups included Reflex (web apps in Python), AITable (spreadsheet AI chatbots), Sismo (privacy-focused data sharing), and Qdrant (vector database)
The report found an increase in funding for these top startups, hitting $513 million in 2023, up 32% from 2022
While the U.S. had the most startups (26) on the list, Europe saw a 20% rise with 23 companies, led by France and the UK
TypeScript remained the most popular programming language used by 38% of the top startups, though Python and Rust grew in usage
The methodology considers factors like GitHub stars growth rate, commercial focus, funding under $100M, and a “reasonably open source” product connection
The liberal definition of “open source” includes source-available licenses like MongoDB’s to reflect commercial perceptions.
Link: https://notisia365.com/ai-and-data-infrastructure-drives-demand-for-open-source-startups
FrugalGPT and Reducing LLM Operating Costs
Matthew Gunton
Medium
Here are the key points about the “FrugalGPT” cost-saving architecture for LLM-driven apps:
1) Large Language Models (LLMs) are significantly more expensive to run compared to other AI models, posing cost challenges for companies.
2) The paper introduces a framework to reduce operating costs while maintaining quality by using a cascade of LLMs ordered from least to most expensive.
3) The user query is first sent to the cheapest LLM
If the answer is good enough according to a scoring model, it is returned
Otherwise, the query moves to the next, more expensive LLM.
4) A small DistilBERT model is used to score the quality of answers at a negligible cost compared to the LLMs.
5) This cascading approach can provide better average answer quality than just using the best (and most expensive) LLM alone.
6) The framework takes advantage of the vast cost differences between LLM providers like OpenAI, Anthropic, etc. for the same level of performance.
7) Potential future enhancements could include caching answers in a vector database for similarity searches before using LLMs.
8) For user-specific interactions, the scoring system would need to be aware of the user context
In essence, FrugalGPT optimizes for cost and quality by intelligently routing queries through a cascade of LLMs from cheapest to most expensive, using an efficient scoring model to determine answer quality at each step.
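The cascade in points 2 through 5 can be sketched as follows; the stub models and scorer below stand in for real LLM calls and the DistilBERT-based scoring model, and are invented for illustration.

```python
# Sketch of the FrugalGPT cascade: try models from cheapest to most
# expensive, returning the first answer the scoring model accepts.
def cascade_answer(query, models, score, threshold=0.8):
    """models: list of (name, answer_fn), ordered cheapest -> most expensive."""
    for name, answer_fn in models:
        answer = answer_fn(query)
        if score(query, answer) >= threshold:
            return name, answer          # good enough: stop here, saving cost
    return name, answer                  # fall back to the best model's answer

# Stub "LLMs" and a stub scorer (a DistilBERT-style model in the paper):
cheap  = lambda q: "short answer"
strong = lambda q: "detailed answer"
scorer = lambda q, a: 0.9 if "detailed" in a else 0.5

name, answer = cascade_answer("What is RAG?",
                              [("cheap", cheap), ("strong", strong)], scorer)
print(name, answer)  # → strong detailed answer (the cheap answer scored too low)
```

When the cheap model's answer clears the threshold, the expensive model is never called, which is where the cost savings come from.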
Link: https://towardsdatascience.com/frugalgpt-and-reducing-llm-operating-costs-ff1a6428bf96
Enriching ERP and Large Enterprises with Generative AI: Step 1 of the Framework
Jason Tan, Brian Ferris
Routledge Taylor & Francis Group
The introduction of ChatGPT in November 2022 marked a major turning point and paradigm shift in AI, prompting tech giants and startups to race to develop their own versions
This has sparked interest from business stakeholders like Chief Data and Analytics Officers (CDAOs) to explore the use cases and potential of generative AI and large language models (LLMs) for their enterprises
The author presents the first step of a proven framework for deploying generative AI in enterprises, illustrated by developing a “Conversation CoPilot” to assist technical founders
A major pitfall to avoid is the “Shiny Object Syndrome” – adopting new tech without a clear understanding of implications
CDAOs must collaboratively explore use cases aligned with core objectives
Potential use cases include enhancing customer service chatbots, knowledge management, marketing/sales copy, and deploying Conversation CoPilots
Organizations should collaborate with generative AI experts to build an MVP addressing specific business goals, like using vector databases, embeddings, and LLMs for contextualized customer replies
Engaging the right internal and external stakeholders, and identifying required data/systems is crucial for successful implementation
Organizations must choose an appropriate strategy – using existing SDKs, partnering with consultants, or building in-house solutions
Finally, insights from generative AI must be translated into actionable outcomes by integrating outputs into existing workflows and decision processes.
Link: https://www.routledge.com/blog/article/enriching-erp-and-large-enterprises-with-generative-ai-step-1-of-the-framework
MLOps and DevOps: Collaborating for Vector Database Excellence in Machine Learning Projects
Adnan Hassan
Marktech Post
Here’s a summary of the key points on the collaboration between MLOps and DevOps for managing vector databases in machine learning projects:
Introduction:
– Effective collaboration between Machine Learning Operations (MLOps) and Development Operations (DevOps) is crucial for successful machine learning (ML) projects, especially those involving vector databases
Roles:
– MLOps focuses on automating and optimizing the end-to-end ML lifecycle, including model deployment, monitoring, and maintenance.
– DevOps streamlines workflows between development and operations teams, enabling faster software delivery and infrastructure management
Collaboration in Vector Databases:
– Vector databases are essential for storing and querying complex data structures used in ML tasks like similarity search and recommendation systems.
– MLOps and DevOps collaborate to ensure vector databases are scalable, performant, and seamlessly integrated into ML pipelines
Practical Application: Building a Recommendation System:
– Data ingestion and preprocessing (DevOps sets up infrastructure)
– Model training and evaluation (MLOps automates process using vector databases)
– Deployment and monitoring (joint effort for automation, scaling, and performance monitoring)
Process Cycle:
1) Planning and requirement analysis
2) Infrastructure setup (DevOps)
3) Data preparation with vector databases
4) Model development and training (MLOps)
5) Continuous integration and deployment (DevOps practices)
6) Monitoring and maintenance (ongoing collaboration)
Conclusion:
The synergy between MLOps (automating ML lifecycle) and DevOps (software delivery and operations) enables robust, scalable, and high-performing ML applications by efficiently managing vector databases and ensuring seamless integration into production environments.
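The similarity-search core of the recommendation system described above can be sketched without any database at all; in a real deployment the vector database handles this ranking at scale. The item names and vectors below are made up for illustration.

```python
import math

# Stand-in for a vector database query: "which stored item vectors are
# most similar to this user vector?" ranked by cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def recommend(user_vec, items, k=2):
    """items: dict of item_id -> embedding vector; returns top-k by similarity."""
    ranked = sorted(items, key=lambda i: cosine(user_vec, items[i]), reverse=True)
    return ranked[:k]

items = {"book": [1.0, 0.1], "movie": [0.9, 0.3], "song": [0.0, 1.0]}
print(recommend([1.0, 0.2], items))  # → ['book', 'movie']
```

MLOps owns producing the embeddings and evaluating ranking quality; DevOps owns running the database that answers this query with low latency.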
Link: https://www.marktechpost.com/2024/03/27/mlops-and-devops-collaborating-for-vector-database-excellence-in-machine-learning-projects
Ten Learnings from the 2024 Gartner Data & Analytics Summit
Subhash Kari
InfoCepts Data & AI
Here are the key takeaways from the 2024 Gartner Data & Analytics Summit, along with the author’s reflections:
1) D&A leaders must collaborate with CEOs to define and execute their company’s AI ambition as a business driver or enabler, focusing on skillsets, tools, mindsets, data, and trust.
2) Executives face the challenge of balancing AI innovation with new regulations around AI risks and harms across different regions.
3) The convergence of AI and BI requires reimagining analytics delivery – faster outputs vs. fundamentally transforming how analytics powers the business.
4) McDonald’s found success by first understanding business context, building a business-led story, and taking a “front back” approach to governing enterprise data and AI.
5) Niche product companies are innovating to support retrieval-augmented generation (RAG) use cases at enterprise scale.
6) Companies are finally prioritizing enterprise data quality practices as a prerequisite for successful AI adoption.
7) Learning from real-world Gen AI acceleration at companies like Ally and Clearwater provides valuable lessons on architecture, processes, and best practices.
8) Deploying AI at scale requires mastering ModelOps, DataOps and DevOps to address challenges around transparency, consistency and skilled resources.
9) Leaders should focus on execution rather than agonizing over making the “right” technology choices in a rapidly evolving landscape.
10) As AI gets smarter, humans must invest in building intentional expertise to collectively achieve goals alongside AI
The overarching theme is the critical importance of a holistic, strategic approach to data, analytics and AI, balancing innovation with governance, while continuously building organizational capabilities.
Link: https://www.infocepts.ai/blog/ten-learnings-from-the-2024-gartner-data-analytics-summit
Pure Storage and NVIDIA collaborate on RAG to customize Generative AI and to help eliminate hall…
Derek du Preez
Diginomica
Large language models (LLMs) today are often too vague or prone to hallucinations for serious enterprise use cases
Vendors are working on approaches to customize LLMs using enterprise data to improve accuracy and relevance
Pure Storage and NVIDIA announced they are collaborating on developing RAG pipelines and vertical-specific RAG solutions, starting with financial services
RAG allows augmenting LLMs with proprietary enterprise data sources by indexing and retrieving relevant context to include in the model’s input
This provides more precise, domain-specific outputs compared to generic LLM outputs trained only on public data
RAG is one of three main approaches for customizing LLMs, alongside fine-tuning models on enterprise data and parameter-efficient adaptation techniques such as LoRA
Combining approaches such as RAG and LoRA is expected to yield better results than using either individually
A key benefit of RAG is the potential to reduce hallucinations by grounding outputs in factual enterprise data and enabling guardrails
While fine-tuning is the most expensive option, RAG provides a balanced approach by augmenting LLMs with relevant context in a scalable manner
The collaboration aims to advance RAG pipelines and vertical solutions to make generative AI more accurate, current and trustworthy for enterprise adoption.
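The retrieve-then-augment flow described here can be sketched end to end. A toy word-overlap retriever stands in for the embedding index and vector search a production RAG pipeline would use; the documents and prompt template are invented for illustration.

```python
# Minimal sketch of the RAG flow: index enterprise documents, retrieve
# the most relevant ones for a query, and prepend them to the model
# input so outputs are grounded in factual data. Retrieval here is a
# toy word-overlap score, not real embedding search.
def retrieve(query, docs, k=1):
    qwords = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(qwords & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = ["Q4 revenue grew 12 percent year over year",
        "The refund policy allows returns within 30 days"]
print(build_prompt("what is the refund policy", docs))
# the retrieved context is the refund-policy document
```

Constraining the model to the retrieved context is what gives RAG its hallucination-reducing effect: the answer is grounded in data the enterprise controls.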
Link: https://diginomica.com/pure-storage-and-nvidia-collaborate-rag-customize-generative-ai-and-help-eliminate-hallucinations
Vector Database Market Size, Share, Challenges and Growth Analysis Report 2033
Amol Shinde
We Market Research (published 28 Mar 2024)
The vector database market is estimated to grow from USD 2.1 billion in 2023 to USD 5.3 billion by 2033, a CAGR of 25% over the forecast period
Specialized demands across a variety of industries are helping to drive market expansion
The market is segmented by deployment mode (on-premises, cloud-based), database type (relational, NoSQL, and NewSQL vector databases), and pricing model (subscription-based, pay-per-use, perpetual licensing)
By industry, the market spans financial services, healthcare and life sciences, retail and e-commerce, manufacturing, telecommunications, government and public sector, energy and utilities, transportation and logistics, media and entertainment, and others
Regional analysis covers North America, Europe, APAC, South America, and MEA
Key companies include Microsoft, Elastic, Alibaba Cloud, MongoDB, Redis, SingleStore, Zilliz, Pinecone, AWS, DataStax, and GSI Technology
We Market Research is an established market analytics and research firm with domain experience spanning different industries
Link: https://articlescad.com/vector-database-market-size-share-challenges-and-growth-analysis-report-2033-64497.html
Bridgeline Releases Zeus Update with Concept and Image Search
Bridgeline Digital, Inc.
Globe Newswire
Bridgeline Digital has released Smart Search by HawkSearch, which leverages AI including large language models (LLMs) and vector databases to enhance online shopping search
Smart Search has three main capabilities: Concept Search, Image Search, and Generative AI (GenAI)
Concept Search understands customer intent behind searches to provide more relevant product results based on descriptions
Image Search allows uploading photos to find and add similar products to the shopping cart
GenAI automatically creates, corrects or expands product descriptions in the catalog as well as landing pages with strong keywords to improve SEO and customer experience
Smart Search supports 50 languages for global accessibility
Prior to public release, select companies like Max Warehouse and Nail Gun Depot used Smart Search with outstanding results during its limited release
The AI-powered capabilities aim to provide a more intuitive online shopping experience to convert visitors to buyers for Bridgeline’s customers
Bridgeline’s CEO highlights the rapid AI progress enabling Smart Search to help customers stay ahead of competitors in the online marketplace
A webinar on April 11th will showcase the new Smart Search features and functionality.
Link: https://www.globenewswire.com/news-release/2024/03/28/2854011/9238/en/Bridgeline-Releases-Zeus-Update-with-Concept-and-Image-Search.html
FLock.io raises $6M for decentralized blockchain AI training platform – SiliconANGLE
Kyt Dotson
Silicon Angle
FLock.io raised $6 million in seed funding led by Lightspeed Faction and Targus Capital to build a blockchain-based platform for decentralized co-creation of AI models
The platform enables users to contribute compute, data, or training scripts to collectively train and fine-tune AI models in a decentralized manner without centralized corporate oversight
It uses federated learning where training data remains on individual devices instead of being sent to a central server, enhancing privacy and security
Users earn “FLock points” for participating in model co-creation, customization, evaluation which can unlock future rewards and benefits
Privacy is preserved through zero-knowledge proofs, homomorphic encryption and secure multi-party computation when accessing data
The platform is model-agnostic, supporting anything from statistical models to large language models like ChatGPT
By decentralizing AI model training, FLock aims to enable broader participation while addressing data privacy and regulatory concerns
Future plans include decentralized hosting by allowing users to contribute computing resources and integration with blockchain networks for accessibility to decentralized apps
The seed funding will be used to further develop FLock’s decentralized machine learning and training platform putting AI governance in users’ hands.
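The federated-learning core (training data stays on each device; only model weights travel) can be sketched with federated averaging on a one-parameter model. This is a generic illustration, not FLock's actual protocol, which layers blockchain incentives and the cryptographic techniques listed above on top.

```python
# Sketch of federated averaging: each client trains locally on its own
# private data and sends back only model weights; the server averages
# them without ever seeing the raw data.
def local_update(weights, data, lr=0.1):
    # One pass of gradient steps for a 1-D linear model y = w*x.
    w = weights
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(global_w, client_datasets, rounds=20):
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(updates) / len(updates)   # only weights are aggregated
    return global_w

# Two clients whose private data both follow y = 2x:
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = federated_average(0.0, clients)
print(round(w, 2))  # → 2.0
```

The server converges to the shared underlying model even though neither client's examples ever leave its device.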
Link: https://siliconangle.com/2024/03/28/flock-io-raises-6m-decentralized-blockchain-ai-training-platform
Top Generative AI Predictions for 2024: A Boardroom Perspective
Abhishek Nag
Embarking On Voyage
Predictions:
- Shift from proof-of-concepts to pilot deployments of GenAI in real operations to assess business impact
- Increased investments in data infrastructure to acquire and process unstructured data for GenAI models
- Growing emphasis on GenAI explainability and ethical AI development to ensure transparency and mitigate biases
- Maturing of GenAI-as-a-Service market with more diverse offerings from cloud providers
- Convergence of GenAI with other AI subfields like computer vision and NLP for more powerful AI systems
Strategic Boardroom Implications:
- Allocate resources for pilot GenAI deployments focused on high-ROI use cases
- Invest in data pipelines, vector databases, and embeddings for unstructured data processing
- Champion explainable GenAI solutions and establish ethical AI governance frameworks
- Evaluate partnering with GenAI-as-a-Service providers to accelerate adoption
- Foster interdepartmental collaboration to merge GenAI with other AI capabilities
- Provide leadership oversight for responsible GenAI development and deployment
- Upskill workforce and prepare for workforce transformation due to AI automation
- Promote human-AI collaboration synergizing human expertise with GenAI capabilities
The post emphasizes GenAI’s transformative potential but stresses the critical human factors – leadership, ethics, workforce readiness, and human-AI collaboration for successful GenAI integration into enterprises.
Link: https://embarkingonvoyage.com/top-generative-ai-predictions-for-2024/
What role does CXL play in AI? Depends on who you ask
Adam Armstrong
Tech Target
Here are the key points about the role of Compute Express Link (CXL) in AI workloads and data centers:
– At Nvidia’s GTC 2024 conference, CXL was notably absent from discussions around accelerating AI workloads, despite being touted for years as a way to expand memory capacity for accelerators like GPUs.
– While Nvidia favors its proprietary NVLink interconnect, some vendors like MemVerge, Micron, and Supermicro demonstrated at GTC how CXL can increase GPU utilization for large language models by expanding the GPU memory pool using lower-cost memory.
– There are differing views on CXL’s relevance for AI – some analysts argue Nvidia’s preference for NVLink limits CXL’s adoption, while others see CXL playing a broader role beyond just GPU-to-GPU connectivity.
– Key CXL use cases include memory expansion and pooling to improve utilization and performance, especially for workloads like database processing for retrieval-augmented generation in AI.
– For generative AI specifically, CXL could help expand limited high-bandwidth memory (HBM) capacity on GPUs in a cost-effective manner to sustain GPU utilization.
– While trading off some performance, the MemVerge/Micron/Supermicro demo showed significant GPU utilization gains by using CXL to expand memory beyond just HBM.
– Opinions remain divided on whether CXL will be critical for AI going forward or have a more supporting role in general data center memory expansion
In summary, CXL’s role in the AI era is still being debated, with potential for memory expansion benefits but facing adoption challenges from Nvidia’s proprietary interconnects.
Link: https://www.techtarget.com/searchstorage/news/366575974/What-role-does-CXL-play-in-AI-Depends-on-who-you-ask
MyScaleDB Open-sourced: the SQL Vector Database with High Performance – Banking Industry Today -…
Nan Xiang
EIN News
MyScaleDB, the SQL vector database, announced on March 29 that it is now open source
MyScaleDB is a high-performance, scalable, and cost-effective database that harnesses SQL queries to accelerate vector search and processing
Fully SQL-compatible:
– Fast, powerful, and efficient vector search, filtered search, and SQL-vector join queries
– Use SQL with vector-related functions to interact with MyScaleDB
“By open-sourcing our technology, we aim to foster innovation and collaboration within the AI developer community, ultimately leading to groundbreaking solutions in AI data management and analytics.”
Wen Dai, General Manager of Solutions Architecture, Greater China, AWS: “Vector data processing is a critical part of LLM infrastructure, while SQL can provide significant scalability and convenience to application developers. With its open source availability, developers will have options to leverage the value of structured data to work with different LLMs for diversified use cases, for better performance, lower cost, and a faster pace of innovation.”
Developers are welcome to visit the GitHub repository and start building with MyScaleDB
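The “SQL plus vector search” query shape can be emulated with Python's standard-library sqlite3 and a user-defined distance function. This is purely illustrative: MyScaleDB provides native, indexed vector functions, and its actual syntax and performance differ.

```python
import json
import sqlite3

# Emulating filtered vector search in plain SQL: vectors are stored as
# JSON text and a user-defined l2() function ranks rows by distance.
# (Illustrative only; not MyScaleDB's syntax.)
def l2(a_json, b_json):
    a, b = json.loads(a_json), json.loads(b_json)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

conn = sqlite3.connect(":memory:")
conn.create_function("l2", 2, l2)
conn.execute("CREATE TABLE docs (id INTEGER, category TEXT, emb TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)", [
    (1, "news", json.dumps([0.0, 1.0])),
    (2, "blog", json.dumps([1.0, 0.0])),
    (3, "news", json.dumps([0.9, 0.1])),
])

# Filtered vector search: nearest 'news' row to the query vector.
query = json.dumps([1.0, 0.0])
row = conn.execute(
    "SELECT id FROM docs WHERE category = ? ORDER BY l2(emb, ?) LIMIT 1",
    ("news", query),
).fetchone()
print(row[0])  # → 3
```

The appeal of the SQL-compatible approach is exactly this: metadata filters, joins, and vector ranking compose in one familiar query language.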
Link: https://www.einnews.com/pr_news/698787045/myscaledb-open-sourced-the-sql-vector-database-with-high-performance
More Than Half of Businesses Plan to Adopt AI Within the Next Two Years
Michael C
Breaking News International
The article discusses the rapid adoption of artificial intelligence (AI) technology across industries and the associated cybersecurity concerns
According to the 2024 State of IT survey by Spiceworks, 57% of organizations have concrete plans to implement AI solutions, with 25% already integrating AI into their operations and 32% preparing to do so within the next two years
Key highlights:
AI Adoption and Cybersecurity Risks:
AI adoption is accelerating, with the potential to revolutionize business operations and drive efficiency
However, AI introduces new attack vectors and risks that organizations must be prepared to address
Increased IT Budgets:
Two-thirds (66%) of organizations plan to increase their total IT budget in 2024 compared to the previous year
The leading reasons for budget increases include upgrading outdated IT infrastructure, increased priority on IT projects, and increased security concerns (48% of businesses)
Cybersecurity Investments:
In 2023, the purchase of cybersecurity solutions/services/apps (61%) and cybersecurity training for employees (56%) were the most popular IT investments among businesses.
35% of companies planned to allocate up to a quarter of their organizational budget for IT needs in 2023, while 32% planned to invest up to half of their budget
Importance of Cybersecurity for AI Adoption:
As AI adoption accelerates, allocating adequate resources for cybersecurity will be crucial to safeguarding these cutting-edge technologies and the sensitive data they process.
Link: https://breakingnewsinternational.com/it/more-than-half-of-businesses-plan-to-adopt-ai-within-the-next-two-years
AI: A Double-Edged Sword For CISOs
News Room
Artificial Intelligence (AI) is a double-edged sword in the world of cybersecurity, posing both threats and opportunities
CISOs must identify these threats and leverage AI to enhance their security measures
AI has become a powerful tool for cybercriminals, enabling highly personalized phishing attacks and sophisticated malware
However, AI also equips CISOs with tools to protect their companies more efficiently, such as threat detection, risk forecasting, and streamlined security tasks
Highlights:
Cisco’s security head and Gartner have highlighted the transformative nature of AI in web security and its pivotal role for CISOs
AI enables cybercriminals to create highly personalized phishing emails that closely resemble trusted sources, increasing the likelihood of deceiving individuals
The top five AI cybersecurity threats include AI-phishing, increased sophistication of APTs, automated vulnerability hunting, evasion of detection systems, and disinformation and deepfakes
The top five AI-driven cybersecurity opportunities for CISOs include threat detection, forecasting risks, streamlined security tasks, spotting phishing attempts, and smart access control
AI can sift through data at remarkable speed, detecting security risks by identifying rare patterns and allowing companies to be more proactive
AI can forecast future attacks, helping companies prepare for and mitigate threats that are yet to strike
AI can replace basic functions like supervising network activities and filtering out false alarms, allowing CISOs to focus on more complicated security issues
Incident reporting can be improved by eliminating up to 80% of manual effort and speeding up the process from days to hours
CISOs must employ the right AI solutions to transform obstacles into stepping stones for a safer digital future.
Link: https://thefinancialnews247.com/a-double-edged-sword-for-cisos
Check Point Announces a New Collaboration with Microsoft to Supercharge Infinity AI Copilot with…
Check Point Software Technologies INC
Globe Newswire
Check Point Software Technologies Ltd. has announced a collaboration with Microsoft to enhance its Check Point Infinity AI Copilot, a generative AI service that automates security tasks and improves security effectiveness
The collaboration utilizes the Microsoft Azure OpenAI Service to address the challenges of increasing cyber threats and a shortage of cyber security professionals
The enhanced solution benefits from advanced large language models (LLMs) provided by Microsoft, making it capable of addressing a wide range of cybersecurity challenges with greater efficiency and effectiveness
Check Point has also developed its own prompt engineering using retrieval augmented generation (RAG) best practices to improve the reliability and accuracy of the service
Highlights:
Check Point Infinity AI Copilot uses AI to automate common and complex security tasks, lightening the workload for security teams and improving security effectiveness against sophisticated attacks
The collaboration with Azure OpenAI Service is a key part of Check Point’s strategy to produce generative AI cyber security products and services
Check Point customers will experience accelerated administration resolution times, advanced incident response, and unified cloud-delivered protection
The collaboration not only evolves security management but also enhances the overall security framework, ensuring comprehensive protection in today’s cloud-centric operational landscape
Check Point Software Technologies Ltd. is a leading AI-powered, cloud-delivered cyber security platform provider protecting over 100,000 organizations worldwide.
Link: https://www.globenewswire.com/news-release/2024/03/26/2852386/0/en/Check-Point-Announces-a-New-Collaboration-with-Microsoft-to-Supercharge-Infinity-AI-Copilot-with-Microsoft-Azure-OpenAI-Service.html
SADA Achieves Over 300% Increase in Generative AI and Machine Learning Projects in 2023
SADA Systems Inc.
Globe Newswire
SADA, an Insight company and a leading Google Cloud Premier Partner, has announced continued momentum in its Generative AI (GenAI) and machine learning operations
Customers are rapidly adopting these technologies, powered by Google Cloud’s Gemini and Vertex AI platform
SADA has increased AI and ML customer projects by 306% year over year, driven by GenAI service engagements
The company has doubled pre-sales and technical resources to support customers throughout the entire process, from ideation to production to ROI
SADA has also focused on enhancing its AI and machine learning differentiations, cloud security, employee productivity and collaboration, and Google Cloud migration with VMware
The company’s team of experts will participate in speaking sessions at Google Cloud Next in Las Vegas, sharing their cloud innovations and success
Highlights:
SADA is helping organizations unlock the power of Gemini for Workspace with workshops featuring live demos, office hours, and surveys
SADA’s ongoing cross-country Cloud Transformation Tour in North America brings Google Cloud experts to discuss machine learning and Gen AI for business insights
SADA’s partnerships continued with a commitment to the ISV partner ecosystem, allowing the company to deliver deep expertise and offer complementary solutions for Google Cloud and add-on solutions.
Link: https://www.globenewswire.com/news-release/2024/03/26/2852389/0/en/SADA-Achieves-Over-300-Increase-in-Generative-AI-and-Machine-Learning-Projects-in-2023.html
Sands Lab Partners with Microsoft Korea to Develop Next-Generation AI Cybersecurity Technology
News Directory 3
Sands Lab, a Korean artificial intelligence (AI) big data security company, has signed a commercial agreement with Microsoft Korea to develop next-generation AI cybersecurity technology
Sands Lab operates “CTX”, a cyber threat intelligence service, and “MNX”, a network threat detection and response solution, and is known for having the largest malware intelligence repository in Asia, with 2 billion malicious code samples and 30 billion pieces of malware analysis big data
Key points:
Sands Lab and Microsoft Korea will work on R&D and technical cooperation using generative artificial intelligence and Large Language Models (LLM) in the field of cybersecurity
Sands Lab’s datasets are provided to public institutions, private companies, and major IT security companies that need to build information security systems
The collaboration aims to use next-generation AI technologies like ChatGPT in the field of cybersecurity, with Sands Lab utilizing Microsoft’s infrastructure to advance its research and development and enter the market
Lee Woong-se, head of Microsoft’s Korean division, believes the collaboration will strengthen Sands Lab’s competitiveness in the global cybersecurity market, including Asia, by incorporating next-generation AI technology
Kihong Kim, CEO of Sands Lab, sees Microsoft as the ideal partner for the commercialization of next-generation AI technology and global performance and scalability, aiming to create an efficient and active business model for the cybersecurity environment
The partnership between Sands Lab and Microsoft Korea showcases the growing importance of AI and big data in the field of cybersecurity and the potential for collaboration between innovative security technology companies and global tech giants.
Link: https://www.newsdirectory3.com/sands-lab-partners-with-microsoft-korea-to-develop-next-generation-ai-cybersecurity-technology
RingCentral reimagines the business phone system with real-time AI
Digitalisation World
RingCentral has launched RingEX with RingSense AI, a transformative AI solution that enhances productivity and collaboration across phone calls, SMS, meetings, and messaging
The AI-powered features include real-time call notes, messaging recap, generative AI search, and more, making conversations smarter and work more efficient
Highlights:
- RingEX with RingSense AI replaces RingCentral MVP and provides intelligence across various communication channels
- Purpose-built AI is infused across RingCentral’s suite, including RingEX, RingCX, RingSense for Sales, and RingCentral Events
- RingSense AI unlocks data-rich conversations to deliver actionable insights and inform intelligent workflows
- RingCentral is among the first to offer a complete AI-infused solution across message, video, and phone
Key features include real-time note-taking, conversation intelligence, unread message recap, AI writer and translator for messages, generative AI search, and advanced meeting summaries and insights
Jim Lundy, Founder and CEO of Aragon Research, highlights the uniqueness of RingSense AI in creating notes, interpreting conversations, and automating action items within a single, consistent user experience.
Link: https://digiworld.news/news/67328/ringcentral-reimagines-the-business-phone-system-with-real-time-ai
Marriott Bonvoy embraces AI to better match travellers with their dream vacays
Kyle Barnett
N Cryptech
Homes & Villas by Marriott Bonvoy has introduced a new AI-powered search engine that helps travelers find their ideal vacation rental property based on their preferences
Developed in collaboration with Publicis Sapient, this feature allows users to search for properties using natural language processing, without specifying a particular destination
Highlights:
- The AI search tool is being tested on the Homes & Villas by Marriott Bonvoy platform, which offers 140,000 premium and luxury rental homes globally
- Users can input their preferences in the search bar, and the AI will match them with suitable properties from the platform’s listings
- The feature is expected to become widely available on the website and mobile app in the coming weeks, with the option to use traditional search functionality still available
- Sample prompts include searching for properties based on location, amenities, pet-friendliness, group size, and budget
- The AI search tool can understand context and adjust filters based on the user’s input, such as the number of guests and price range
- While the tool may not be as helpful for users with specific requirements, it is ideal for those seeking inspiration for their next vacation
The introduction of this AI-powered search feature is part of Marriott International’s broader technology transformation to provide more seamless, personalized, and engaging travel experiences for customers.
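The filter-adjustment behaviour described above (inferring guest count, budget, and amenities from a free-text query) can be sketched with a toy rule-based parser. This is a hypothetical illustration only; the actual system built with Publicis Sapient is not public, and all function names and patterns here are assumptions.

```python
import re

# Hypothetical sketch: map a natural-language vacation query to structured
# search filters, as the article describes. Not Marriott's implementation.
def parse_query(query: str) -> dict:
    """Extract simple search filters from a free-text vacation query."""
    filters = {}
    q = query.lower()

    # Group size, e.g. "for 6 guests" or "sleeps 6"
    m = re.search(r"(?:for|sleeps)\s+(\d+)\s*(?:guests|people)?", q)
    if m:
        filters["guests"] = int(m.group(1))

    # Nightly budget, e.g. "under $400 a night"
    m = re.search(r"under\s+\$?(\d+)", q)
    if m:
        filters["max_nightly_rate"] = int(m.group(1))

    # Simple amenity keywords
    for amenity in ("pool", "pet-friendly", "beachfront", "hot tub"):
        if amenity in q:
            filters.setdefault("amenities", []).append(amenity)

    return filters

print(parse_query("pet-friendly beachfront villa for 8 guests under $500 a night"))
```

A production system would use an LLM or intent model rather than regexes, but the output shape (structured filters fed to the existing listing search) is the same idea.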
Link: https://n-cryptech.com/marriott-bonvoy-embraces-ai-to-better-match-travellers-with-their-dream-vacays
Microsoft Bing Head to Step Down Amid AI Push
Pymnts
Mustafa Suleyman, co-founder of DeepMind, was recently hired by Microsoft to lead its consumer AI efforts, with most of his startup Inflection’s staff joining him
The move is said to reflect Microsoft CEO Satya Nadella’s impatience with the company’s AI projects, despite investments in integrating AI into products like Windows, Office, and Bing
Microsoft has invested $13 billion in OpenAI, which is now valued at $80 billion, and an additional $2.1 billion in French AI company Mistral
Apple and Google are reportedly considering a deal to integrate Google’s Gemini AI engine into the iPhone, potentially shaking up the AI and search market
Michael Jaconi, CEO of AI marketing tech company Button, suggests that Apple could leverage its App Store and Apple Services to invest in AI for developer tools, app content, and search, potentially competing with Google in the future.
Link: https://www.pymnts.com/personnel/2024/microsoft-bing-head-to-step-down-amid-ai-push
Hewlett Packard Enterprise Leverages GenAI to Enhance AIOps Capabilities of HPE Aruba Networking…
Business Wire
Hewlett Packard Enterprise (HPE) has announced the integration of generative AI (GenAI) Large Language Models (LLMs) within its cloud-native network management solution, HPE Aruba Networking Central
The solution is hosted on the HPE GreenLake Cloud Platform and aims to provide enhanced AIOps network management capabilities while prioritizing data privacy and security
Highlights:
HPE Aruba Networking Central’s new self-contained set of LLM models is designed to improve user experience and operational efficiency, focusing on search response times, accuracy, and data privacy
The solution leverages telemetry data from nearly 4 million network-managed devices and over 1 billion unique customer endpoints to power its machine learning models for predictive analytics and recommendations
The new GenAI LLM functionality will be incorporated into HPE Aruba Networking Central’s AI Search feature, complementing existing ML-based AI to provide deeper insights, better analytics, and more proactive capabilities
HPE Aruba Networking Central ensures customer data security by “sandboxing” the LLMs within the platform and removing PII/CII data while delivering sub-second responses to network operations questions
The solution’s training sets for the GenAI models are up to 10 times larger than other cloud-based platforms, including tens of thousands of HPE Aruba Networking-sourced documents and over 3 million captured customer questions
The new GenAI LLM-based search engine will be available in HPE’s FY24 Q2 and is included with all tiers of licensing for HPE Aruba Networking Central
Verizon Business is expanding its managed services portfolio to include HPE Aruba Networking Central, leveraging the solution to help organizations improve network and application performance without burdening in-house IT resources.
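The PII/CII removal step described above, stripping identifying data from a network-operations question before it reaches the sandboxed LLM, might look roughly like this generic redaction pass. This is a hypothetical sketch under assumed patterns, not HPE's actual pipeline.

```python
import re

# Hypothetical sketch of PII/CII scrubbing: replace recognizable identifiers
# with typed placeholders before the text is sent to a sandboxed LLM.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "MAC": re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"),
}

def scrub(text: str) -> str:
    """Replace recognizable identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub("Why is 10.0.0.42 (aa:bb:cc:dd:ee:ff, jane@corp.example) dropping packets?"))
```

Typed placeholders (rather than blanket deletion) preserve enough structure for the model to reason about the question while keeping the identifiers themselves out of the prompt.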
Link: https://www.businesswire.com/news/home/20240326658938/en/
AI spending in EMEA will increase by two-thirds in 2024
Editor Mar
Observatorial
The “CIO PlayBook 2024: It’s all About Smarter AI” report by Lenovo reveals that AI spending by businesses in the EMEA region will grow by 61% in 2024, with 97% of organizations investing in or planning to invest in generative AI
The report highlights the following key points:
40% of respondents consider AI a “game changer,” with equal investments in generative AI (25%), interpretative AI (25%), and machine learning (25%).
57% of companies have already invested in AI, while 40% plan to do so during the year
Only 3% have no intention of activating AI projects
EMEA enterprises prefer to implement AI strategies in hybrid (48%) or private (24%) cloud due to stringent data privacy regulations
Manufacturing companies are the most enthusiastic about AI (47%), while telcos are the least (22%)
Italy and the Netherlands have the highest rate of planned investments in generative AI (68%), and Italy has the least difficulty in hiring staff with AI skills (34%)
The biggest technological challenges of generative AI include model capacity limitations (40%), potential misuse and AI “hallucinations” (37%), finding a reliable data platform (36%), and using third parties (35%)
Organizational challenges include employee fears about workplace displacement (40%) and IT resistance to implementing ever-evolving AI tools and technologies (45%)
The report emphasizes that CIOs are enthusiastically embracing the potential of AI and will need to make the right technology investments and partnerships to maximize its value for their companies.
Link: https://observatorial.com/news/technology-and-science/747241/ia-spending-in-emea-will-increase-by-two-thirds-in-2024
AWS, Accenture and Anthropic Join Hands To Boost AI Adoption Among Enterprises
Accenture Newsroom
Accenture, Amazon Web Services (AWS), and AI company Anthropic are collaborating to help organizations, especially those in highly-regulated industries, responsibly adopt and scale customized generative AI technology
The collaboration aims to speed innovation, improve customer service, and increase workforce productivity while ensuring data privacy and security
Key points:
- Organizations can access Anthropic’s AI models through Amazon Bedrock and customize them using Accenture’s services and accelerators, including the new generative AI switchboard
- Accenture engineers will be trained as specialists in using Anthropic’s models on AWS, providing end-to-end support for clients deploying generative AI applications
- The collaboration has already delivered impact in the public health space, such as the Department of Health for the District of Columbia’s “Knowledge Assist” chatbot
- Industry-specific AI solutions have been developed, such as the Accenture intelligent underwriting and claims solution for insurance companies powered by Claude
- Accenture has added generative AI capabilities to its AI Navigator and GenWizard platforms, leveraging Claude and AWS to drive efficiencies for joint clients
The collaboration combines Anthropic’s focus on model performance and safety, AWS’s approach to security and reliability, and Accenture’s deep domain expertise and technical know-how to build tailored solutions for key use cases
This collaboration builds on Accenture and AWS’s long-standing relationship and Anthropic’s recent selection of AWS as its primary cloud provider.
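The access path mentioned above, reaching Anthropic's models through Amazon Bedrock, can be sketched as follows. The request shape follows Bedrock's publicly documented Anthropic Messages format, but treat the model ID and field names as assumptions to verify against current AWS documentation.

```python
import json

# Illustrative sketch of calling an Anthropic model through Amazon Bedrock.
# Model ID and request shape are assumptions based on public AWS docs.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # example identifier

def build_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON body Bedrock expects for Anthropic Messages models."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_request("Summarize this underwriting submission in three bullets.")

# With AWS credentials configured, the invocation itself would be:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(modelId=MODEL_ID, body=body)
#   print(json.loads(response["body"].read())["content"][0]["text"])
print(body)
```

Routing through Bedrock rather than a direct vendor API is what lets regulated organizations keep the AWS security and compliance controls the article emphasizes.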
Link: https://newsroom.accenture.com/news/2024/aws-accenture-and-anthropic-join-forces-to-help-organizations-scale-ai-responsibly