The European Union is at a pivotal moment in shaping the future of Artificial Intelligence (AI) regulation, a journey marked by intense debate, international influences, and the looming shadow of US technological dominance. As we delve into this complex landscape, it’s crucial to understand the multifaceted aspects of the EU’s legislative process, the challenges it faces, and the global implications of its decisions.
The Current State of EU AI Legislation
The EU is in the final stages of negotiating the AI Act, a groundbreaking piece of legislation aimed at regulating AI applications, particularly those deemed high-risk. The recent trilogue discussions among the Council, Parliament, and Commission have made significant progress, especially in classifying high-risk AI applications and overseeing powerful foundation models. However, contentious issues remain, such as the specifics of prohibitions and law enforcement exceptions.
The Hiroshima AI Process and International Standards
Parallel to the EU’s efforts, the G7 leaders, under the Hiroshima AI process, have agreed on International Guiding Principles and a voluntary Code of Conduct for AI developers. These principles aim to ensure trustworthy AI development and complement the EU regulations. They focus on risk mitigation, responsible information sharing, and a labelling system for AI-generated content.
Challenges and Disagreements
Despite these advancements, the AI Act faces significant challenges. Negotiations recently hit a roadblock due to disagreements from major EU countries over the regulation of foundation models like OpenAI’s GPT-4. Countries like France, Germany, and Italy, influenced by their AI startups, fear over-regulation could hinder their competitiveness.
The Evolution of the AI Act
It’s essential to trace the AI Act’s evolution to understand its current state. Initially, the European Commission’s draft in April 2021 did not mention general-purpose AI systems or foundation models. However, feedback from stakeholders, including the Future of Life Institute, led to the inclusion of these aspects. The Act has since evolved, with various amendments focusing on high-risk AI systems and the obligations of foundation model providers.
The Role of Powerful Foundation Models
Recent developments have highlighted the need to regulate powerful foundation models. The Spanish presidency’s draft proposed obligations for these models, including registration in the EU public database and assessing systemic risks. This approach aims to balance innovation with safety and ethical considerations.
The Impact of US Competition
The EU’s legislative process is significantly influenced by the competition from US tech giants. European AI startups, like Mistral and Aleph Alpha, lag behind their US counterparts in resources and development. This disparity raises concerns about the EU’s ability to compete globally in the AI sector. The fear is that stringent regulations might further widen this gap, favoring US companies like OpenAI and Google.
Equinet and ENNHRI’s Call for Enhanced Protection
In a significant development, Equinet and ENNHRI jointly issued a statement urging policymakers to enhance protection for equality and fundamental rights within the AI Act. Their recommendations include ensuring a robust enforcement and governance framework for foundation models and high-impact foundation models, incorporating mandatory independent risk assessments, fundamental rights expertise, and stronger oversight.
Looking Ahead: The Final Trilogue and Beyond
The next trilogue session on December 6, 2023, is crucial. It will address unresolved issues and potentially shape the final form of the AI Act. The Spanish presidency aims for a full agreement by the end of 2023, but disagreements could push negotiations into 2024, especially with the European Parliament elections looming.
The EU’s journey in regulating AI is a delicate balancing act between fostering innovation, ensuring public safety, and maintaining competitiveness on the global stage. The outcome of the AI Act will not only shape the future of AI in Europe but also set a precedent for global AI governance. As these negotiations continue, it’s vital to keep an eye on how these regulations will evolve in response to technological advancements and international pressures.
SingularityNET is a decentralized platform and marketplace for artificial intelligence (AI) services, designed to democratize access to AI technologies. Its approach may be the best path to a thriving AI ecosystem in the face of big tech dominance and the struggles of the open source approach.
Elon Musk’s new Grok AI could pose a challenge to SingularityNET’s decentralized approach. Unlike SingularityNET, Grok is controlled by a single entity, Musk’s xAI. That centralized control could give Grok an advantage in speed and agility, but it also makes Grok more susceptible to censorship and manipulation. Additionally, Grok is not committed to open source software, which could make it less appealing to open source AI developers.
Grok also has real-time access to information via the X platform, a significant advantage over models trained only on static datasets.
Big tech’s hold on AI
However, big tech companies such as Google, Microsoft, and OpenAI currently have a dominant position in the AI industry. They control the largest AI datasets and have the resources to develop and deploy the most advanced AI models. This gives them a significant advantage over smaller AI companies.
“The nature of using 3rd party created and controlled LLM’s means that you do not – subject to the license employed* – truly own the output of your work. Your data informs their system intelligence, you build value into their eco-system, albeit while benefiting from their considerable R&D. But where is your long-term defensible value creation?”
This observation by UK-based AI expert Patrick O’Connor-Read underscores the challenges faced by smaller AI companies when they rely on big tech’s resources and models. They might not have true ownership of their AI outputs, raising questions about long-term sustainability.
The struggles of open source AI
The Llama-led open source approach to AI promoted by Meta has many benefits. It allows more people to contribute to the development of AI technologies and it makes AI more affordable to develop and use. However, the open source approach also has some challenges. One challenge is that it can be difficult to coordinate the development of large and complex AI models. Another challenge is that open source AI models are often vulnerable to misuse and abuse. Indeed, from a commercial perspective, “On Wall Street, Llama is hard to value and, for many investors, hard to understand”.
So why does this comparison between the decentralized approach to AI and big tech AI matter? Consider the insights from the 31 October Techmeme Ride Home podcast:
“Let’s imagine a scenario where the biggest AI models will always win, because they’re the biggest, have the most data, or the best-trained are always the best, the most accurate, or whatever. In that scenario, this is already a done deal: the companies owning the biggest models will win this space.
“But what you have to think about is, they’re not sharing the secret sauce of these models, the biggest players like OpenAI. And because of that, maybe nobody can challenge their supremacy without huge costs, because they can’t reverse-engineer the secret sauce.
“Thus, the move towards open sourcing everything in order for 1,000 flowers, if you will, to bloom. So what I’m saying is some folks are worried that there are already incumbents here in this nascent AI space, OpenAI, Anthropic, a couple of others; this thing could already be an oligopoly before it even really got started.
“That’s also why the VC class is pushing the open source narrative. VCs need a whole ecosystem of startups to rise up and bloom for this to be an investable space. If this new space is already closed off by the first movers, it’s dead, at least for investors.”
In response to such open source AI concerns a Meta spokesperson said: “We believe in open innovation, and we do not want to place undue restrictions on how others can use our model. However, we do want people to use it responsibly. This is a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse.”
“Open Source is a misnomer. Current practises more closely resemble tribal allegiances, all serving their respective corporate puppet master, than they do an egalitarian utopian ideal,” says O’Connor-Read.
“Decentralisation is imperfect – disorganised, duplicated effort, inefficient allocation of resources – but for this messiness you get freedom, emergent beauty from self-organising network intelligence. I am reminded of Churchill’s line – ‘democracy is the worst form of government, except for all the others.’ And so it is with decentralisation,” he adds.
O’Connor-Read’s insight highlights the imperfect nature of decentralization but also its potential for emergent beauty in self-organizing networks. It’s akin to democracy, imperfect but often the best available option.
SingularityNET’s decentralized approach
SingularityNET’s decentralized approach aims to address the challenges of both big tech and open source AI. By decentralizing the development and deployment of AI, it can level the playing field for AI companies of any size or resource level. The decentralized approach can also help mitigate the risks of misuse and abuse of AI technologies.
SingularityNET at Cardano 2023, along with Desdemona, also known as “Desi,” a humanoid robot and the lead vocalist of the Jam Galaxy Band. 😎
In the evolving landscape of artificial intelligence, the recent alliance between SingularityNET and Dfinity Foundation marks a pivotal shift towards a decentralized AI network. This groundbreaking venture, leveraging smart contracts and the Internet Computer (ICP) blockchain, stands out for its unique capabilities. As the recent article on Medium looking at this new alliance notes, “This collaboration harnesses the prowess of smart contracts and the Internet Computer (ICP) blockchain, perhaps the best and currently undervalued blockchain in existence. It can simply do things all the rest cannot do.”
Decentralized AI offers several key advantages over traditional, centralized AI systems. The Medium article highlights, “Unlike its traditional AI counterparts, often governed by corporate giants like Microsoft or Google, DeAI operates without the dominion of a singular entity.” This autonomy leads to increased transparency, greater accessibility, and a reduced risk of censorship. Furthermore, the collaboration between SingularityNET and Dfinity Foundation is poised to render DeAI models more transparent, accessible, and secure, thus propelling the responsible and ethical utilization of AI.
Connecting AI systems: HyperCycle
As reported in Cointelegraph on 22 November, “the AI industry is dominated by large corporations and institutional investors, making it difficult for individuals to participate. HyperCycle, a novel ledgerless blockchain architecture, emerges as a transformative solution, aiming to democratize AI by establishing a fast and secure network that empowers everyone, from large enterprises to individuals, to contribute to AI computing.”
HyperCycle’s ledgerless architecture, powered by layer 0++ blockchain technology, enables rapid, low-cost microtransactions among a diverse network of AI agents. These agents are interconnected and collaborate to solve complex problems without intermediaries. This “internet of AIs” concept lets AI systems interact and collaborate directly, addressing the fragmentation and slow processes prevalent in the current AI landscape.
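To make the “internet of AIs” idea concrete, here is a minimal toy sketch of agents paying each other per-call micropayments and chaining services without an intermediary. All names and the fee mechanics are illustrative assumptions, not HyperCycle’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A toy AI agent that offers a service for a per-call micropayment."""
    name: str
    fee: float            # micropayment charged per call
    balance: float = 0.0

    def serve(self, request: str) -> str:
        # Stand-in for a real AI computation
        return f"{self.name} processed: {request}"

def call_agent(caller: Agent, provider: Agent, request: str) -> str:
    """Direct agent-to-agent exchange: payment and service, no intermediary."""
    if caller.balance < provider.fee:
        raise ValueError(f"{caller.name} cannot afford {provider.name}'s fee")
    caller.balance -= provider.fee
    provider.balance += provider.fee
    return provider.serve(request)

# Two agents collaborate on a problem, each paid directly for its step
summarizer = Agent("Summarizer", fee=0.001)
translator = Agent("Translator", fee=0.002)
client = Agent("Client", fee=0.0, balance=1.0)

step1 = call_agent(client, summarizer, "long report")
step2 = call_agent(client, translator, step1)
print(step2)  # Translator processed: Summarizer processed: long report
```

The point of the sketch is the shape of the interaction: tiny payments settle at call time between peers, so complex workflows can be composed from independently owned agents.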
One of the most significant contributions of HyperCycle is its ability to democratize AI. By establishing a fast, secure network, it empowers not just large enterprises but also individuals to contribute to AI computing. This inclusive approach is crucial for the development of a truly global and accessible AI ecosystem. HyperCycle’s HyperAiBox, a compact, plug-and-play device, further democratizes AI by enabling individuals and organizations to perform AI computations at home, reducing their reliance on large data centers.
SingularityNET (SNET) has several advantages over other approaches to AI ecosystems:
Decentralization: It’s a decentralized platform, not controlled by any single entity, which makes it more resistant to censorship and manipulation.
Open source: It’s committed to open source software; all of its code is freely available for anyone to use and modify. This transparency helps build trust in the platform.
Scalability: It’s designed to be scalable, supporting a large number of users and services, which is essential for a thriving AI ecosystem.
Flexibility: It’s flexible enough to accommodate a wide range of AI technologies and services, making it a good fit for both big tech companies and open source AI developers.
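As a rough illustration of what an open marketplace for AI services implies, here is a toy registry where any provider can list a service and any consumer can discover and call it for a fee. The class, service names, and pricing are hypothetical; this is not the SingularityNET SDK.

```python
class Marketplace:
    """Toy model of a decentralized AI marketplace registry.

    Any provider can list a service; any consumer can discover and call it.
    No single party owns the catalogue in the real decentralized setting.
    """
    def __init__(self):
        self._services = {}  # name -> (handler, price)

    def register(self, name, handler, price):
        if name in self._services:
            raise ValueError(f"service {name!r} already registered")
        self._services[name] = (handler, price)

    def list_services(self):
        return sorted(self._services)

    def call(self, name, payload, payment):
        handler, price = self._services[name]
        if payment < price:
            raise ValueError("insufficient payment")
        return handler(payload)

market = Marketplace()
# Handlers stand in for real AI models behind the marketplace
market.register("sentiment", lambda t: "positive" if "good" in t else "neutral", price=0.01)
market.register("uppercase", str.upper, price=0.001)

print(market.list_services())                                    # ['sentiment', 'uppercase']
print(market.call("sentiment", "a good result", payment=0.01))   # positive
```

The design choice worth noticing is that discovery, pricing, and invocation live in one open registry rather than behind any one vendor’s API, which is what makes the platform usable by big and small providers alike.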
How SingularityNET could win over big tech and open source
SingularityNET could potentially win over both big tech and open source with its decentralized approach by offering the following benefits:
For big tech:
SingularityNET can help big tech companies reach new markets and develop new AI products and services. It can also help them comply with increasingly stringent AI regulations.
For open source AI developers:
The decentralized AI network can help open source AI developers monetize their work and reach a wider audience. It can also help them protect their work from misuse and abuse.
SingularityNET’s decentralized approach may be the best way to build a thriving AI ecosystem in the face of big tech and the struggles of open source. It offers benefits to both big tech companies and open source developers: by decentralizing the development and deployment of AI, it can level the playing field and create an ecosystem where everyone can thrive.
Examples of how SingularityNET could win over big tech and open source:
 Big tech companies could use it to develop and deploy new AI products and services in a more agile and efficient way. For example, a big tech company could use it to develop a new AI model for medical diagnosis. The company could then make the model available to other companies and organizations through the SingularityNET marketplace.
 Open source AI developers could use SingularityNET to monetize their work and to reach a wider audience. For example, an open source AI developer could create a new AI model for natural language processing. The developer could then publish the model on the marketplace and charge a fee for users to access it.
 It could help to protect open source AI models from misuse and abuse. For example, it could implement a system for licensing AI models. This system could require users to agree to certain terms and conditions before they are allowed to access a model.
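The licensing idea in the last example can be sketched as a simple terms-acceptance gate in front of a model. The class, model name, and terms below are invented for illustration; a real system would record acceptance on-chain or in a signed agreement.

```python
from datetime import date

class ModelLicense:
    """Toy license gate: users must accept terms before accessing a model."""
    def __init__(self, model_name, terms):
        self.model_name = model_name
        self.terms = terms
        self._accepted = {}  # user -> date the terms were accepted

    def accept(self, user):
        """Record that the user agreed to the terms and conditions."""
        self._accepted[user] = date.today()

    def access(self, user):
        """Grant access only to users who have accepted the terms."""
        if user not in self._accepted:
            raise PermissionError(
                f"{user} has not accepted the terms for {self.model_name}"
            )
        return f"granted: {self.model_name}"

lic = ModelLicense("open-nlp-model", terms="no harmful use; attribution required")
lic.accept("alice")
print(lic.access("alice"))  # granted: open-nlp-model
```

Even this crude gate shows the mechanism: misuse protection becomes a condition of access enforced by the platform, rather than a request buried in a README.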
Overall, SingularityNET’s decentralized approach to AI ecosystems has the potential to revolutionize the way AI is developed and deployed. By leveling the playing field and creating an ecosystem where everyone can thrive, it could help to usher in a new era of AI innovation.