
Vivaldi CEO Reaffirms Ban on Generative AI in Browser

Why Vivaldi’s CEO is Warning Against Generative AI: The Hidden Security and Privacy Risks

In an era where generative AI tools like ChatGPT and Google Gemini are being integrated into nearly every digital service, one prominent tech leader is raising a red flag. The CEO of the privacy-focused browser Vivaldi has taken a firm stance, banning the use of generative AI for internal company work and cautioning users about the significant risks these popular tools can pose.

This position isn’t about resisting technology; it’s about safeguarding data, protecting intellectual property, and demanding a more ethical approach to AI development. Here’s a breakdown of the critical concerns that every user and business should consider.

Your Data is the Product

The primary concern revolves around data privacy. When you enter a prompt into a public generative AI chatbot—whether it’s a piece of code, a business strategy, or a personal email—you are essentially handing that information over to the AI company. This data is often used to further train the AI model, meaning your sensitive information could become part of its knowledge base.

For a business, this is a catastrophic security risk. Submitting internal documents, source code, or financial plans to a public AI is equivalent to leaking trade secrets. Once that data is processed, you lose control over how it is stored, used, or potentially exposed in the future.

The Legal Labyrinth of AI Content

Generative AI models are trained on vast amounts of data scraped from the internet, often without the explicit consent of the original creators. This raises serious legal and ethical questions that remain largely unanswered.

Using AI to generate code, marketing copy, or other content puts businesses in a precarious position. There is a tangible risk that the AI’s output could inadvertently include copyrighted material from its training data. This exposes your company to potential copyright infringement lawsuits and complicates the ownership of AI-generated work. Until clear legal frameworks are established, relying on these tools for creative or technical output is a significant gamble.

Beyond the Hype: The Problem of AI Accuracy

While impressive, generative AI is far from infallible. These models are known to “hallucinate,” producing confident but entirely false information. They can generate code with subtle, hard-to-find bugs or present misleading or outright inaccurate claims as fact.

Relying on AI for critical tasks without rigorous human oversight can introduce serious errors into your workflow. For a company that builds complex software like a web browser, a single AI-generated flaw could compromise the security and stability of the entire product. The potential for time-consuming debugging and reputational damage outweighs the perceived benefits of speed.
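
As a purely hypothetical illustration of how such flaws slip through, the short Python sketch below shows an AI-style helper that looks correct at a glance but silently drops data on an edge case, along with the kind of unit test a human reviewer would write to catch it. The function and scenario are invented for this example; they are not taken from any real AI output or from Vivaldi's code.

```python
# Hypothetical illustration: an AI-suggested helper that "looks right" but
# silently loses data, plus the reviewer's test that exposes it.

def batch_items(items, batch_size):
    """Split items into fixed-size batches. This AI-style version drops the
    final partial batch whenever len(items) is not a multiple of batch_size."""
    return [
        items[i:i + batch_size]
        for i in range(0, len(items) - batch_size + 1, batch_size)
    ]


def test_batch_items_keeps_trailing_items():
    # 7 items in batches of 3 should come back as sizes [3, 3, 1]; the buggy
    # version returns [3, 3] and quietly discards the seventh item.
    batches = batch_items(list(range(7)), 3)
    assert sum(len(b) for b in batches) == 7, f"items lost, got {batches}"


if __name__ == "__main__":
    try:
        test_batch_items_keeps_trailing_items()
    except AssertionError as exc:
        print("Human review caught a silent data-loss bug:", exc)
```

The point is not this particular bug but the workflow: without a reviewer who writes the test, the flaw ships.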

A Principled Stance on Data Collection

At its core, the opposition to current generative AI models is also an ethical one. Vivaldi’s leadership argues that the widespread practice of scraping the web for training data without permission is fundamentally wrong. The foundation of many AI models is built on data collected without the consent or compensation of its creators.

This approach stands in direct conflict with a privacy-first philosophy, which emphasizes user consent and data control. By rejecting these tools for internal use, the company is aligning its actions with its core values of transparency and user trust.

Actionable Security Tips for Using AI

While the promise of AI is undeniable, it’s crucial to approach it with caution. Here are a few practical steps you can take to protect yourself and your business:

  • Treat AI Prompts Like Public Posts: Never enter personal, financial, or proprietary information into a public AI chatbot. Assume anything you type can and will be seen by others.
  • Establish Clear Company Policies: Businesses should create and enforce strict guidelines on how employees can use generative AI tools. These policies should explicitly forbid feeding sensitive company data into them, and can be backed by technical controls such as the prompt-scrubbing sketch shown after this list.
  • Read the Terms of Service: Understand how an AI service plans to use your data. Some enterprise-level AI solutions may offer better privacy protections, but you must verify this first.
  • Always Verify the Output: Whether you’re using AI for research, coding, or content creation, always fact-check and review the results. Do not trust its output blindly.
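
For teams that still allow limited AI use, a lightweight technical control can back up the written policy. The sketch below is a minimal Python example with made-up patterns and function names (nothing Vivaldi-specific); it shows one way to redact obvious secrets such as email addresses or API keys from a prompt before it ever leaves the company network.

```python
import re

# Hypothetical patterns covering a few obvious kinds of secrets; a real policy
# filter would need far more categories (names, hostnames, customer data, ...).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings with placeholders before a prompt
    is sent to an external AI service; return the scrubbed text plus the
    labels of anything redacted so the attempt can be logged or blocked."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, findings


if __name__ == "__main__":
    raw = "Summarise this email from jane.doe@example.com using key sk_abcdefghijklmnop1234"
    clean, hits = scrub_prompt(raw)
    print(clean)  # email address and key replaced with [REDACTED ...] placeholders
    print(hits)   # ['EMAIL', 'API_KEY'] -> could trigger a policy warning instead
```

A real filter would need far broader coverage (personal names, internal hostnames, customer records) and should block or flag any prompt that triggers a redaction rather than silently forwarding it.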

Ultimately, the conversation around generative AI must evolve beyond its capabilities to include a serious discussion about its costs—to our privacy, our security, and our intellectual property. Taking a cautious, informed approach is the only way to harness the benefits of this technology without falling victim to its hidden risks.

Source: https://go.theregister.com/feed/www.theregister.com/2025/08/28/vivaldi_capo_doubles_down_on/
