LLMs and the Future of AI: A Global Perspective

The world of artificial intelligence (AI) is abuzz with discussions about the potential and implications of Large Language Models (LLMs). A recent Twitter Space discussion titled “Biden Cracks Down | China’s $5Bn Nvidia Splurge #AITownHall” delved deep into this topic, with experts from various fields sharing their insights.

Geopolitical Implications and China’s AI Ambitions
The conversation took a geopolitical turn when Robert Scoble highlighted China’s significant investment in Nvidia chips. “China’s move isn’t just technological; it’s strategic. They’re positioning themselves as global AI leaders,” he remarked. Graham dePenros chimed in, noting, “China’s approach to AI, especially LLMs, is holistic. They’re looking at the bigger picture, from data collection to real-world applications.” Gary Marcus reflected on the longer-term consequences: “In the short term, there’s no question that throttling China’s chip supply makes a difference. But I wonder what the long-term reaction is from a country that has that many resources and can’t afford to be cut out of this.”

Drawing on news of China’s new ‘Artificial Intelligence for Science’ programme, Eugene Chung highlighted the broader implications: “China’s AI strategy isn’t just about outpacing the West. It’s about setting global standards and norms.” As the South China Morning Post reported: “The central government has been promoting research and development in AI and has pledged favourable policies to drive more investment. AI was identified as a key industry in China’s controversial ‘Made in China 2025’ industrial plan in which it was stated that China’s goal was to become a global leader in the field by 2030.” China, the world’s second-largest economy, is advancing rapidly in the AI domain, leading in academic publications, patents, and both cross-border and global AI funding. A projection published last year by the consulting firm IDC suggests that by 2026 China’s investment in AI will reach US$26.69 billion, roughly 8.9% of worldwide investment, making it the world’s second-largest AI investor.

The Ethical Quandary
The ethical dimensions of LLMs were a significant point of discussion. “With great power comes great responsibility,” said Vincent Boucher, emphasizing the potential risks associated with unchecked AI capabilities. Scoble also raised concerns about misinformation, stating, “LLMs can be double-edged swords. Their ability to generate content is unparalleled, but what if they’re used to spread falsehoods?” Chung responded with a call for global collaboration: “We need international standards and guidelines. AI is a global phenomenon, and its regulation should be too.” Marcus agreed that any such effort must be coherent and international: “We need some coherent understanding. The UN is forming a body to try to think about AI in a broader perspective. We don’t want a piecemeal thing that’s just sort of everybody putting their autograph on some piece of legislation.”

The Power and Potential of LLMs
Boucher, an AI enthusiast, had kicked off the discussion with a powerful statement: “The power of LLMs is undeniable. We’re seeing capabilities that were once thought to be decades away.” His sentiment was echoed by Chung, who added, “LLMs are not just about understanding language; they’re about understanding human thought.”

Marcus, however, emphasized the importance of software over hardware in the AI race. He stated, “I think it’s a risk that it’s just going to make trying to move faster making these chips. But the war here is not over hardware, but over software. You could have all of the chips in the world with large language models, and they still have a lot of deep and fundamental problems.”

The Debate Over Multiple LLMs
The value of having multiple LLMs was a contentious topic. Scoble remarked, “The sheer scale of these models is mind-boggling. But do we need so many? It’s a question of quality over quantity.” Brian Roemmele countered, “Diversity in LLMs is essential. Different models bring different strengths and perspectives.” @Neuro_tarun added, “It’s not just about having multiple models. It’s about how they interact, complement, and sometimes even compete with each other.” Boucher concurred, “In the world of AI, diversity is strength. Multiple LLMs mean multiple perspectives, which can only be a good thing.”

Marcus, diving deeper into the debate, stated, “We don’t want the world training 195 large language models, destroying the environment in the process. We actually do want to have some alignment in the original sense of that word around what are best practices here.” He also stressed how little control we currently have over these systems, saying, “There’s a lot yet to be figured out. We don’t have very good control of large language models. We don’t want our fate of the universe to be in the hands of whether somebody wrote the right prompt or not.”

Conclusion: The Road Ahead
The future of AI, particularly LLMs, is a complex tapestry woven with potential, ethical considerations, geopolitical implications, and a profound reflection on what it means to be human. Zolayola CEO Patrick O’Connor-Read warns of the dangers of over-relying on LLMs, stating, “The danger with LLMs is we over-rely on them to provide deep insight to domains we have superficial understanding of, with the result that we are unable to discern full correctness from partial correctness. LLMs actually need skillful handling, direction and redirection, to arrive at a useful outcome for non-trivial matters.” His concerns echo the broader debate over the nature of human cognition and reasoning, and the role of emotions and experiences in our understanding of the world.

Michael Charles Borrelli, Co-CEO/COO of AI & Partners, strikes a more optimistic note: “LLMs hold the transformative potential to revolutionize communication, content generation, and knowledge dissemination across industries.” However, he also acknowledges the complexities surrounding the debate over multiple LLMs and the need to balance specialized AI models with resource allocation.

O’Connor-Read’s call for transparency resonates with Borrelli’s perspective on the ethical quandary of AI. O’Connor-Read asserts, “I strongly believe humans need a way to peer inside the data flows, algorithmic processes, and indeed the biases present in the data’s sourcing/handling. Political bias and editorial censorship are real risks.” Borrelli adds, “Navigating AI’s ethical quandary demands a delicate balance between innovation and safeguarding human rights and values.”

The geopolitical landscape is also a critical consideration, with Borrelli highlighting, “China’s AI ambitions are redefining global power dynamics and raising concerns about data sovereignty and technological influence.” O’Connor-Read’s scepticism about enforceable standards complements this view, suggesting that an industry code of conduct might be a more pragmatic approach.

Perhaps most poignantly, O’Connor-Read reflects on the legacy of AI, stating, “Ironically, the greatest legacy of AI could well be a renaissance of human reasoning, valuing our ability to fuse reason and emotion, intuition and experience.” This sentiment encapsulates the essence of the AI journey, a path that leads us not only to technological advancement but also to a deeper understanding of our own humanity. In the end, the road ahead for AI and LLMs is one of exploration, innovation, reflection, and responsibility. It’s a journey that challenges us to harness the power of technology while honouring the uniquely human values that define us.

The Twitter Space discussion underscored the complexities and potential of LLMs in the global AI landscape. As Boucher aptly summarized, “We’re on the cusp of an AI revolution, and LLMs are leading the charge.” Marcus, with a forward-looking perspective, concluded, “We will cross the mountain, it’s just a matter of getting over our addiction to LLMs and being ready to look for something new. But what I’m optimistic about is that we will do that and that will completely change AI hopefully for the better.”


AI in Journalism: A Paradigm Shift or a Passing Phase?


The integration of Artificial Intelligence (AI) into journalism is not just a technological advancement; it’s a paradigm shift both for the profession and for the business of making news. The Frontline Club event ‘Rise of the Machines: AI in Journalism’ on 24 May shed light on this shift. As AI’s role in newsrooms continues to grow, the landscape of journalism is undergoing a profound transformation, raising questions that are both intriguing and essential.

Generative AI: the double-edged sword
Generative AI, exemplified by tools like ChatGPT, is rewriting the rules of content creation. Reuters Institute digital journalist Marina Adami’s observation captures the zeitgeist: “We’re at the cusp of a new era. AI isn’t just a tool; it’s becoming a collaborator.” This collaboration, however, comes with its challenges. AI’s ability to fabricate content that is indistinguishable from human writing is both its strength and its vulnerability.

Dr. Bahareh Heravi added depth to this perspective, noting, “Generative AI is like fire. It can warm your house, but it can also burn it down.” The implications are clear: while AI can enhance journalistic capabilities, unchecked use can erode the very foundation of trust and authenticity.

“Over the last decade we’ve seen increasing use of AI throughout the media value chain; from metadata creation to sentiment analysis to recommendation engines. Generative AI is just the next iteration in this process,” said Patrick O’Connor-Read. Elaborating on the role of Large Language Models (LLMs), he added: “LLMs offer an evolution within this wider AI trend, but they are prone to hallucination. So the workflow will shift, rather than machines supporting humans, humans will support machines, providing input guidance and output verification/sensechecking.”

The trust equation: balancing innovation with integrity
For institutions like the BBC, trust is not just a value; it’s their currency. “In the age of misinformation, our commitment to truth is unwavering. But how do we integrate AI without compromising that trust?” pondered David Caswell, former Executive Product Manager at the BBC, during the Frontline Club event. Parmy Olson, a Bloomberg Opinion columnist who covers technology with a particular focus on AI and chatbots, echoed this sentiment: “The challenge isn’t just about using AI responsibly; it’s about communicating its use transparently to our audience.” This transparency is the linchpin that holds the trust equation together. The traditional model of journalism was linear: journalists create and consumers consume; AI is disrupting that linearity.

Patrick O’Connor-Read, who has navigated both sides of this equation, producing TV and digital content for over ten years with companies like Zatzu and researching applied AI, offers a unique perspective. He points out, “It’s an open secret that the business models of many media outlets constrain time-intensive human research; a lot of this low- to mid-level work can be automated and achieve near parity outcomes, at least for perfunctory output.”

As David Caswell shared, “It’s no longer just about what we want to say; it’s about how the audience wants to hear it.” This shift towards consumer-centric content is revolutionary. Marina Adami highlighted the transformative potential of this shift: “Imagine a world where news adapts to you, not the other way around. That’s the promise of AI in journalism.” However, with this promise comes the responsibility of ensuring that customization doesn’t lead to echo chambers, a concern raised by Heravi, a senior academic at the Institute for People-Centred AI at the University of Surrey.

O’Connor-Read believes that while news outlets are personality-led, the emergence of synthetic personalities is imminent. “News outlets are personality led, people return to their favourite brands and hosts who provide a clear point of view. These sources of authority will continue to command attention, but it is only a matter of time – weeks and months more than years and decades – before synthetic personalities emerge able to process and synthesize information beyond the capabilities of a human, and attract and retain an audience.” He continues to explore the boundaries of what pure/total AI formats look like with his current venture, Zolayola, focusing on blockchain & applied AI.

The automation wave
An article in The Economist titled ‘The Third Wave of AI in Journalism’ describes the initial phase of AI in journalism as automation. Machines have been assisting in delivering news for years: the Associated Press began publishing automated company earnings reports as early as 2014, and The New York Times leverages machine learning to decide how many free articles to show readers before they hit a paywall. This automation wave has been primarily data-driven, focusing on generating news stories from structured data like financial reports and sports results.
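To make this concrete, the sketch below shows the kind of template-driven generation the first wave relied on: structured fields in, formulaic prose out. It is a minimal illustration, not any outlet’s actual pipeline; the field names and sample figures are invented.

```python
# A minimal sketch of first-wave automation: rendering a short earnings
# brief from one structured record. All field names and numbers here are
# invented for illustration.

def earnings_brief(report: dict) -> str:
    """Turn a structured earnings record into a templated news brief."""
    surprise = report["eps_actual"] - report["eps_expected"]
    verb = "beat" if surprise > 0 else "missed" if surprise < 0 else "met"
    return (
        f"{report['company']} reported earnings of "
        f"${report['eps_actual']:.2f} per share for {report['quarter']}, "
        f"which {verb} analyst expectations of ${report['eps_expected']:.2f}. "
        f"Revenue came in at ${report['revenue_bn']:.1f} billion."
    )

if __name__ == "__main__":
    sample = {
        "company": "Acme Corp",
        "quarter": "Q2 2023",
        "eps_actual": 1.42,
        "eps_expected": 1.35,
        "revenue_bn": 12.8,
    }
    print(earnings_brief(sample))
```

Because every sentence is pinned to a data field, output like this is fast and reliable, but it can only ever say what the template anticipated.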

Augmentation and analysis
The second wave, as described by computational journalist Francesco Marconi in the Reuters Institute article titled ‘ChatGPT: Threat or Opportunity for Journalism?’, shifted the emphasis to augmenting reporting. AI was used to analyze large datasets and uncover trends. This wave saw the use of machine learning and natural language processing to provide deeper insights and context to news stories. The Argentinian newspaper La Nación’s use of AI to support its data team is a prime example of this phase.
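As a rough illustration of that augmentation, the toy sketch below surfaces terms whose mentions spike from one month to the next, the sort of signal a data team might flag for reporters to investigate. The corpus and the spike threshold are invented for the example.

```python
# A toy sketch of second-wave augmentation: flagging terms whose mentions
# spike between two months of a corpus so reporters can dig in. The
# documents and the threshold are invented for the example.
from collections import Counter, defaultdict

def term_counts(docs: list[tuple[str, str]]) -> dict[str, Counter]:
    """Count word frequencies per month from (month, text) pairs."""
    by_month: dict[str, Counter] = defaultdict(Counter)
    for month, text in docs:
        by_month[month].update(word.lower().strip(".,;") for word in text.split())
    return by_month

def spiking_terms(by_month, baseline, target, factor=3):
    """Return terms mentioned at least `factor` times more than in the baseline month."""
    base, cur = by_month[baseline], by_month[target]
    return [w for w, n in cur.items() if n >= factor * max(base[w], 1)]

docs = [
    ("2023-05", "council budget vote delayed again amid budget questions"),
    ("2023-06", "flooding hits river district; flooding damage strains budget; flooding relief debated"),
]
print(spiking_terms(term_counts(docs), "2023-05", "2023-06"))  # -> ['flooding']
```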

The third wave: generative AI
The third wave, which we are currently experiencing, is characterized by generative AI. These are large language models capable of generating narrative text at scale. While they offer applications beyond simple automated reports, they come with their own set of challenges.
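Below is a hedged sketch of what this third wave can look like in practice: an LLM drafts narrative text from supplied facts, with a human left to verify the output, per the workflow shift O’Connor-Read describes earlier. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative choices, not a recommendation.

```python
# A hedged sketch of third-wave generative drafting: an LLM turns structured
# facts into narrative text, and a human must fact-check before publication.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are example choices.
from openai import OpenAI

client = OpenAI()

def draft_story(facts: dict) -> str:
    """Ask the model for a short news draft grounded in the supplied facts."""
    fact_list = "\n".join(f"- {key}: {value}" for key, value in facts.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Write a three-sentence news brief using ONLY the facts given."},
            {"role": "user", "content": fact_list},
        ],
    )
    return response.choices[0].message.content

facts = {"event": "city council approves flood barrier",
         "vote": "7-2", "cost": "US$40 million"}
print(draft_story(facts))  # a draft only: a journalist must verify it
```

The constraint in the system prompt narrows, but does not eliminate, the hallucination risk the panelists describe, which is why the human verification step remains in the loop.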

Madhumita Murgia, recently appointed AI editor at the FT, points out that while generative AI can synthesize information and make edits, it lacks the originality and analytic capability that distinguish quality journalism.

The future: collaboration or replacement?
The overarching sentiment from all sources is that while AI has a significant role to play in the future of journalism, it cannot and should not replace human journalists. AI can assist, augment, and even automate certain tasks, but the human touch, analysis, and intuition remain irreplaceable.

The future of journalism in the age of AI is not about machines taking over but about journalists and AI working in tandem to deliver accurate, timely, and insightful news to the masses.

Illustration: Lindsey Bailey/Axios
