Risk and Fraud Analysis
On November 14, 2024, business and AI leaders gathered at the Hyatt Regency in Düsseldorf for LatentView’s exclusive event, AI Risk & Reward: Navigating the Business Landscape. The event centred on AI’s potential benefits and risks, particularly in light of the newly enacted EU AI Act, which aims to regulate and guide AI development across Europe.

WELCOME NOTE

A Glimpse into
AI Adoption in Germany and the EU

Speaker
Attila Mermer
Director, EMEA,
LatentView Analytics

The event kicked off with a look at the AI landscape in the EU and Germany. Hosted by Attila Mermer, this session gave attendees a glimpse of how Germany’s AI landscape is evolving into a critical driver of economic growth.

At a time when the country is facing challenges like high energy costs and intense global competition, AI has emerged as a potential game-changer for the nation’s future. Attila highlighted that Germany, with its export-driven economy and commitment to technological innovation, has an opportunity to outpace rivals like the US, China, and South Korea by becoming a global AI hub.

One of the most promising developments is the surge in AI investments, with leading US tech companies like Microsoft pouring billions into Germany’s AI infrastructure. This investment signals a shift towards Germany becoming a hub for AI research and development.

The data speaks for itself:

  • Over 50% of German companies plan to increase their investments in generative AI (GenAI), emphasising its growing relevance in driving business transformation.
  • AI adoption among European businesses has surged by 32%, with AI set to add an estimated €600 billion in value to the European economy by 2030.

As AI continues to reshape industries and accelerate growth, Germany is positioning itself as a key player in the next wave of technological and economic advancement.

Artificial Intelligence:
10 Things to Know

Speaker
Finn Raben

Founder
Amplifi Consulting

As promising as AI’s potential may be, Finn Raben took the stage to explore the other side of this powerful technology: its complexities, risks, and ethical challenges.

He discussed the growing use of AI in industries and emphasised the concerns surrounding misinformation, job displacement, and ethical oversight. Finn also shared some key insights on the differences between the US and the EU when it comes to AI regulations.

Some of the key concerns highlighted by Finn:

AI’s impact on jobs: As AI systems become more widespread, concerns about workforce redundancies are rising. While new roles requiring AI expertise will emerge, the transition may leave some workers displaced. As the saying goes, “AI won’t take away your jobs, but people who know how to use AI will.” The key challenge is equipping workers with the skills to adapt to the changing landscape.

Misinformation: Consumers are increasingly concerned about AI’s role in spreading misinformation, particularly through deepfakes and manipulated content on social media.

Algorithmic bias and “Hallucination Errors”: Finn discussed instances where AI systems, like ChatGPT, have displayed bias or generated inaccurate information. He cited a case where a lawyer used ChatGPT for legal arguments that lacked factual accuracy, raising alarms about the responsible use of AI.

Ethical implications: Finn shared an example of a chatbot trained on Twitter data that quickly learned to replicate negative and offensive language, highlighting the importance of ensuring AI systems are developed and governed with ethics at the forefront. Similarly, the downsizing of ethics departments in some tech companies in favour of expanding data science teams has sparked concerns about prioritising profits over responsible AI development.

Governance and transparency: The EU AI Act lays out stringent rules for AI systems, emphasising human oversight, data integrity, and the need for businesses to comply with ethical standards.

Finn also shared an insightful analysis of the EU AI Act. The Act outlines a risk-based categorisation of AI, imposing different rules depending on the level of risk the technology poses. For example, high-risk systems in critical sectors like healthcare or finance face strict requirements for transparency, human oversight, and data governance.

Additionally, the Act mandates that AI systems must be explainable, with companies required to be transparent about how AI makes decisions, especially in industries where accountability is high, such as healthcare and law.
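
To make the risk-based structure more concrete, below is a minimal, illustrative sketch (not legal guidance) of how a company might look up the kinds of obligations discussed above for a given risk tier. The tier names follow the Act’s four-level structure; the example obligations and the example system are simplified assumptions for illustration only.

```python
# Illustrative sketch only: a simplified mapping of EU AI Act risk tiers to
# example obligations. The obligations listed here are paraphrased and
# incomplete; consult the Act itself for compliance purposes.

RISK_TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited from being placed on the EU market"],
    "high": [
        "transparency and technical documentation",
        "human oversight",
        "data governance and quality controls",
    ],
    "limited": ["disclose to users that they are interacting with AI"],
    "minimal": ["no additional obligations beyond existing law"],
}


def obligations_for(risk_tier: str) -> list[str]:
    """Return the example obligations attached to a given risk tier."""
    if risk_tier not in RISK_TIER_OBLIGATIONS:
        raise ValueError(f"Unknown risk tier: {risk_tier}")
    return RISK_TIER_OBLIGATIONS[risk_tier]


# Example: a clinical decision-support model in healthcare would typically
# fall under the high-risk rules for critical sectors mentioned above.
print(obligations_for("high"))
```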

EU vs. US Approach to AI Regulation

| Feature | EU | US |
|---|---|---|
| Approach | Law-based, with strict rules and penalties. | Principles-based, with industry self-regulation. |
| Regulation | High, with government oversight. | Low, with minimal government intervention. |
| Risk assessment | Mandatory, with defined risk categories. | Expected, but largely self-determined by industries. |
| Banned systems | Specific systems like biometric identification. | No specific bans. |
| Enforcement | Fines up to 7% of annual turnover. | Reliant on industry compliance with self-determined principles. |

Key Takeaways:

  • Risk assessment: Companies must determine the risk levels associated with their AI systems and implement appropriate safeguards.
  • Compliance obligations: Strict rules are in place regarding data usage, transparency, and accountability, with potential penalties for non-compliance.
  • Human oversight: Businesses must prioritise AI literacy, ensuring employees understand its potential and limitations. In this context, Finn emphasised the importance of the 20/10/70 rule: 20% of AI success comes from data, 10% from algorithms, and 70% from convincing people to use it. This underscores the critical role of fostering a human-centric approach to AI adoption and development.

Panel Discussion

Hype to Hypergrowth:
Deriving ROI from AI

Moderator
Sanjay Annadate
Business Head – Europe,
LatentView Analytics
Speaker
Dr. Hartmut Wittig
VP Magenta Business API – Go-to-market & Sales,
Deutsche Telekom AG
Speaker
Dr. Thibault Geoui
Founder,
Venabili Labs
Speaker
Sean Carney
Design Leader and C-Suite Advisor, Former CDO at Philips

Following the insightful session on the EU AI Act, Sanjay Annadate from LatentView sat down with industry leaders to discuss strategies for getting past the AI hype and deriving measurable value from it while managing the risks.

Each panellist shared their experiences with AI adoption within their respective industries: telecommunications, healthcare, and consumer goods.

The discussion highlighted some key challenges businesses face when adopting AI:

  • Data privacy concerns: As AI systems collect and process vast amounts of data, ensuring privacy and security remains a critical issue.
  • Human-centric design: Panellists emphasised the importance of designing AI applications that enhance human capabilities rather than replace them.
  • Building trust: Dr. Geoui emphasised that transparency around AI’s benefits and risks is vital to gaining consumer confidence, especially when AI solutions are being deployed in public-facing environments. Dr. Wittig shared that being upfront about using AI, rather than hiding it, can build stronger trust with customers.
  • Following the shiny things: Many companies are fascinated with the “cool factor” of AI and attempt to adopt it without laying the groundwork needed for its successful implementation.

AI strategies in Germany remain a mixed bag; Dr. Wittig pointed out that while 25% of businesses have AI strategies in place, the majority (75%) are still in the early stages of adoption. The rise of GenAI tools like ChatGPT, however, is driving greater interest and could serve as a catalyst for more businesses to develop comprehensive AI plans.

The Road Ahead: Key Takeaways for Business Leaders

As AI continues to transform industries, the panellists suggested that businesses must prioritise the factors below to maximise AI’s potential. 

  1. AI inventories: Companies must create inventories of their AI systems to assess compliance with the EU AI Act and ensure that their applications are ethical and transparent.
  2. Data quality: The quality of data is crucial for effective AI implementation. Drawing from his experience, Dr. Geoui commented, “It’s frustrating when different departments within a company generate valuable data but don’t know how to use it. At the same time, the AI team, focused on cost-saving, argues that the data is too expensive to keep, without realising that data should be reusable.” He emphasised, “To realise AI’s full potential, companies must ensure their data is AI-ready by adhering to the FAIR principles: findable, accessible, interoperable, and reusable.”
  3. Love the problem, not the solution: Sean Carney encouraged business leaders to broaden their perspective and look beyond their own sector for solutions. “Someone from another industry has probably solved this issue already,” he said.
  4. Don’t take things at face value: Sean urged companies to critically evaluate AI solutions on their merits, rather than relying on the brand or company name behind them.
  5. Cross-functional collaboration: Successful AI adoption requires collaboration across different departments, including IT security, design, and data science. Involving these stakeholders from the outset ensures that AI systems are secure, user-friendly, and meet business needs.
  6. Focus on user needs: When developing AI solutions, it’s crucial to prioritise user needs and enhance their experiences. This involves understanding user workflows, addressing potential concerns, and ensuring that AI systems are seamlessly integrated into existing processes.
  7. The rise of AI agents: The panel discussed the growing potential of AI agents, which can orchestrate different AI models to perform more complex tasks. AI agents are expected to streamline workflows and make AI more accessible to users with varying technical expertise (a simple illustrative sketch of this orchestration pattern follows this list).
  8. Sustainability considerations: Sean emphasised the need for businesses to not just embrace sustainability but to adopt regenerative models. He believes AI can help businesses achieve these goals by optimising energy usage and promoting sustainable practices.
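
As a rough illustration of the orchestration idea in point 7, the sketch below shows a toy “agent” that routes named sub-tasks to different model functions. The task names, routing rules, and model stubs are hypothetical stand-ins for real model calls, not a description of any specific product.

```python
# Toy sketch of the "AI agent" orchestration pattern discussed by the panel:
# a single entry point routes sub-tasks to different (stubbed) models.
# All task names and model stubs below are hypothetical.

def summarise(text: str) -> str:
    return f"[summary of a {len(text)}-character document]"

def assess_risk(text: str) -> str:
    return "high" if "healthcare" in text.lower() else "minimal"

def draft_reply(text: str) -> str:
    return "[draft customer reply]"

MODEL_ROUTES = {
    "summarise": summarise,
    "assess_risk": assess_risk,
    "reply": draft_reply,
}

def run_agent(task: str, payload: str) -> str:
    """Route a named sub-task to the registered model stub."""
    if task not in MODEL_ROUTES:
        raise ValueError(f"No model registered for task: {task}")
    return MODEL_ROUTES[task](payload)

# Example: chaining sub-tasks into a simple workflow.
document = "Customer complaint about a delayed healthcare claim..."
print(run_agent("summarise", document))
print(run_agent("assess_risk", document))
```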

The panellists agreed that education, transparency, and open dialogue between businesses, policymakers, and the public are essential for ensuring AI benefits society as a whole.

Ultimately, businesses must move beyond the hype and focus on how AI can solve real-world challenges. By staying grounded in practical applications, AI can deliver substantial ROI and propel organisations toward future growth.

Solutions in Action

  • MARKEE: AI-driven agentic workflow across the campaign lifecycle
  • BeagleGPT: Move from 1000s of dashboards to 1 GenAI app
  • AI Penpal: Improve funnel conversion with hyper-personalised emails
  • InsightsIQ: Unlock the power of data with your AI-powered analytics companion
  • Smart Innovation: AI-powered recommendations for product innovation
  • LASER: Simplify enterprise knowledge search from hours to seconds

Glimpses of the Event

Our Past Events

Datanomics: The future of financial services CX is AI-driven and human-centric

Location: New Jersey

On October 17, 2024, leaders from top financial firms convened in New Jersey for LatentView’s exclusive event—Datanomics: Powering CX 2.0 in Financial Services with GenAI. The event featured insightful discussions on the transformative potential of GenAI for customer experience and strategies for its effective implementation in the financial services industry.


Hype to Hypergrowth: Deriving ROI from AI

Location: Menlo Park, CA

LatentView Analytics was back in the Bay Area for the 18th edition of our roundtable, and it turned out to be our largest one yet! We had 130+ industry experts join us in Menlo Park, CA, for an engaging discussion on the theme, “Hype to Hypergrowth: Deriving ROI from AI.” The event sparked insightful conversations and brought together some of the brightest minds in the field to shift the mainstream AI narrative.
