Dimitris Kokoutsidis, Sept 26, 2024, CyberFM

Artificial Intelligence (AI) is no longer a futuristic concept; it is actively reshaping business landscapes across industries. During Claris Engage 2024, Cris Ippolite delivered a compelling presentation on how businesses can harness AI’s power responsibly while weighing its potential risks. He shared real-world applications of AI, showed how pre-trained models can be integrated through APIs, and touched on broader concerns such as security and the ethical implications of AI, topics that dovetail with Europe’s upcoming AI regulations under the EU AI Act.

Cris Ippolite’s AI Journey

Cris Ippolite’s journey with AI started from curiosity and evolved into a disciplined practice of integrating AI models into business solutions. Initially, Ippolite explored machine learning out of personal interest but soon recognized AI’s broader business applications. This experimentation led him to integrate machine learning with FileMaker, and over time he became a thought leader in the responsible business integration of AI.

Cris emphasized how models such as GPT-3 and GPT-4, along with open-source alternatives like Falcon, give businesses powerful tools, but he also stressed the importance of managing AI risks. As we’ll discuss further, these considerations are now being codified into law with regulations like the EU AI Act, which aims to ensure that AI systems are safe, lawful, and aligned with fundamental rights.

Key Themes from the Presentation

  1. AI is Not a Database—It’s a Tool: Cris made it clear that AI is not a traditional database or a storage system; rather, it is a functional tool that performs tasks such as sentiment analysis, keyword extraction, classification, and summarization. The purpose of AI is to process information and generate actionable insights, not to replace structured databases like those found in FileMaker.

This ties directly into the EU AI Act, which focuses on ensuring that AI tools are used safely and ethically. Under this regulation, businesses will need to clarify their AI systems’ purposes and ensure they are transparent, especially when integrating them with databases or making automated decisions based on AI outputs.
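
To make the “tool, not database” idea concrete, here is a minimal Python sketch of the kind of task Cris described: a hosted chat model classifies the sentiment of a piece of text, and the resulting label is written back into the structured system of record. It assumes the OpenAI Python client and an API key in the environment; the model name and helper function are illustrative, not taken from the presentation.

```python
# Minimal sketch: using an AI model as a *tool* (here, sentiment classification),
# not as a store of record. Assumes the OpenAI Python client and an API key in
# the environment; the model and function names are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for a customer comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# The label would then be written back into the structured database
# (e.g., a FileMaker record), which remains the system of record.
print(classify_sentiment("The new release fixed every issue we reported. Great job!"))
```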

  2. API Calls for Secure AI Use: Ippolite explained that integrating AI models into business workflows doesn’t require hosting large models on-site. Many businesses today rely on pre-trained AI models accessed via secure APIs, which lets them benefit from powerful AI without the overhead of hosting the models on their own infrastructure.

However, Cris also cautioned that while external APIs make AI more accessible, they raise concerns about data privacy and security, especially when sensitive business data is involved. The EU AI Act will enforce stricter measures to protect individuals’ rights and ensure businesses are compliant when sharing data with external AI services through APIs.
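
The sketch below illustrates this pattern under stated assumptions: a hosted, OpenAI-compatible chat-completions endpoint is called over HTTPS, credentials stay in environment variables, and obviously sensitive identifiers are redacted before any text leaves the network. The redaction rules and field names are illustrative only, not a compliance recipe.

```python
# Minimal sketch of calling a hosted, pre-trained model over HTTPS while limiting
# what leaves the company network. The endpoint shape follows the OpenAI-compatible
# chat-completions convention; the redaction rules are illustrative assumptions.
import os
import re
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # keep credentials out of source code

def redact(text: str) -> str:
    """Strip obvious personal identifiers before text is sent to an external API."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)     # email addresses
    text = re.sub(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b", "[ID]", text)  # ID-like numbers
    return text

def summarize(text: str) -> str:
    payload = {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "Summarize the text in two sentences."},
            {"role": "user", "content": redact(text)},
        ],
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```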

  3. Mitigating Risks and Avoiding AI Hallucinations: One of the standout points of Cris’s presentation was the risk of AI hallucinations—situations where AI models generate incorrect or misleading information. Ippolite emphasized the importance of businesses controlling AI outputs by feeding accurate data and instructions into the model to avoid these risks. He noted that retrieval-augmented generation (RAG), which combines real-time data retrieval with AI models, can help businesses ensure the accuracy of AI responses.

The EU AI Act prioritizes this concern, mandating that AI systems used in high-risk scenarios (such as hiring, insurance, or credit scoring) must provide reliable, accurate information. Businesses must ensure their AI systems are transparent and free of such hallucinations to comply with the Act.
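
A minimal RAG sketch along those lines, assuming the OpenAI Python client: a deliberately naive retriever pulls the most relevant company documents, and the model is instructed to answer only from that context or admit it doesn’t know. A production setup would typically replace the keyword-overlap retriever with embedding-based vector search.

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the model in
# company documents and instruct it to answer only from that context.
# The keyword-overlap retriever stands in for proper vector search; the
# documents and names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

DOCUMENTS = [
    "Refunds are issued within 14 days of a return being received at the warehouse.",
    "Support is available Monday to Friday, 09:00-17:00 CET, via email and phone.",
    "Enterprise contracts include a 99.9% uptime commitment with quarterly reviews.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive retriever: rank documents by words shared with the question."""
    q_words = set(question.lower().split())
    scored = sorted(DOCUMENTS,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the context below. "
                        "If the answer is not in the context, say you don't know.\n\n" + context},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```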

  4. Personalization, AI Execution, and Open-Source Models: Ippolite provided a live demo showing how models such as GPT-4, as well as open-source models like Falcon, can be put to work in business solutions, with the open-source options able to run securely within a business’s firewall. This is particularly important for companies that handle sensitive or proprietary data. AI personalization, where insights are tailored to specific users based on their needs, was a major highlight of his talk.

With the EU AI Act, this personalized approach could face stricter regulations, particularly in sectors like healthcare, finance, or legal services where sensitive data is processed. The Act will enforce compliance, ensuring companies maintain proper governance over AI-generated personalized insights and recommendations.

  5. The Future of AI: Open-Source and In-House AI Systems: According to Ippolite, open-source models like Falcon and Llama will play an increasing role for companies concerned about security and privacy. By deploying these models internally, businesses can ensure none of their data leaves the company, aligning with the EU AI Act’s principles of data sovereignty and protection.
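
As a rough illustration of the in-house approach, the sketch below loads an open-source instruction-tuned model with the Hugging Face transformers library so that prompts and data never leave local infrastructure. The Falcon checkpoint is an assumption for illustration; it needs a suitably sized GPU, and any smaller open model can be swapped in for testing.

```python
# Minimal sketch of running an open-source model entirely in-house with the
# Hugging Face transformers library, so prompts and data stay on local hardware.
# The Falcon checkpoint is illustrative; it requires a GPU with enough memory.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place the model on available GPUs
)

prompt = "Summarize in one sentence why hosting AI models internally helps with data privacy."
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```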

The EU AI Act: What Does It Mean for AI in Business?

The EU AI Act is a comprehensive regulation designed to ensure that AI systems are developed and used in a way that upholds EU standards on safety, transparency, and fundamental rights. The regulation categorizes AI systems based on their risk levels, ranging from minimal risk to unacceptable risk. Businesses will be held to different standards depending on the types of AI they deploy, with a particular focus on:

  • Transparency Requirements: Businesses must disclose when AI is being used and how its outputs are being generated, ensuring users understand they are interacting with an AI system.
  • Data Quality: AI systems need high-quality, unbiased training data to avoid discrimination or misinformed outputs, a concern Cris echoed regarding AI hallucinations.
  • Security and Governance: AI systems must be secure by design, which means companies must ensure they have control over their AI systems and data. Open-source models hosted internally, as Cris demonstrated, align with this principle by preventing data from being sent to third-party servers.

Businesses using AI, especially in high-risk sectors, will need to take significant steps to comply with the EU AI Act, which means building robust AI governance structures, ensuring transparency, and using trusted data sources.
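
One practical building block for such governance, sketched below as an illustration rather than anything prescribed verbatim by the Act, is to route every AI call through a wrapper that records which model answered, for what purpose, and what it produced, so that usage can be disclosed and audited later. The field names and log format are assumptions.

```python
# Illustrative governance sketch (not a legal compliance tool): wrap every AI call
# so that usage is disclosed and auditable. Field names and the log format are
# assumptions, not requirements quoted from the EU AI Act.
import hashlib
import json
import time

AUDIT_LOG = "ai_audit_log.jsonl"

def audited_ai_call(model_fn, prompt: str, model_name: str, purpose: str) -> str:
    """Run an AI call and append an audit record for later review."""
    output = model_fn(prompt)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model_name,
        "purpose": purpose,  # why this AI system is being used
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_preview": output[:200],
        "ai_generated": True,  # disclosed to downstream users
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

Any of the API-based or in-house calls sketched earlier could be passed in as the hypothetical `model_fn` argument, keeping one audit trail across all AI usage.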

Applying Cris Ippolite’s Insights to AI Governance under the EU AI Act

Cris Ippolite’s insights into the business use of AI—particularly around mitigating risks, ensuring data privacy, and embracing open-source models—offer valuable lessons for businesses preparing to comply with the EU AI Act. As companies adopt AI, they must balance innovation with responsibility, ensuring they follow the latest regulations to protect their customers, users, and business operations.

This intersection of AI’s evolution and the impending AI regulatory landscape provides a road map for businesses to responsibly integrate AI into their workflows. By combining real-world AI applications with proactive risk management, businesses can not only meet compliance requirements but also stay ahead in the fast-evolving AI-driven marketplace.