Important note: We recently interviewed leading AI researcher and analyst Dr. Alan D Thompson about Google’s new Gemini model and to get an update on the broader AI space. In light of this discussion and to test AI’s capabilities, we decided to feed it the audio and see if it could write an article from it, which it was able to do in a near instant. Here’s a slightly edited version of the article produced from our recent discussion.
To listen to our recent podcast interview with Dr. Thompson, see Dr. Alan Thompson Explains Why Google's New AI Model, Gemini, Is a Huge Milestone
Introduction to Google DeepMind's Gemini
The recent advancements in artificial intelligence have been nothing short of breathtaking, with significant leaps forward in capabilities and applications. One such advancement is the introduction of Google DeepMind's Gemini, a revolutionary model that's stirring excitement within the AI community and beyond. Google has created a true competitor in the space, aiming to set new standards and surpass its contemporaries like OpenAI's GPT series.
Google's AI journey has been punctuated by various innovations, and Gemini represents the culmination of years of research and development. With this model, Google aims to not only match but outperform existing technologies through superior reasoning and human-like interaction.
Gemini Breaks New Intelligence Benchmark
In an industry where benchmarks are paramount for determining the prowess of an AI system, Gemini has made a groundbreaking entry. The model has beaten a prominent AI benchmark, the MMLU, with an impressive score of 90% which now surpasses human experts.
For reference, MMLU is a popular AI performance test that stands for Massive Multitask Language Understanding, which involves a wide range of questions across 57 different subjects such as physics, math, medicine, history, computer science, law and more.
Gemini is not limited to just text; as a multimodal model, it adeptly handles text, images, audio, and video inputs. This capacity enables it to recognize and interpret nuances in voice and emotion, showcasing an extraordinary level of artificial cognition. Such diverse input capabilities broaden the scope of AI applications, rendering Gemini a versatile tool suitable for myriad purposes.
The Evolution and Accessibility of Google DeepMind Gemini
Accessibility is crucial for any technological innovation to be impactful, and Google DeepMind understands this. The much-anticipated release of the Gemini model is structured in ways that cater to both professional and broader public needs. With versions suited for different levels of use, Google ensures that various sectors can harness the power of advanced AI technologies.
Interested in the market trends? Check out our premium weekday podcast
The "Pro" model of Gemini has already been made available, integrating it into the daily operations and workflows in the United States. Through the Vertex AI platform, which offers “enterprise-ready generative AI,” users have the opportunity to explore the model’s capabilities, iterating and refining AI tasks in real-world scenarios. This process democratizes the power of AI, moving it from an exclusive research domain to a practical business tool.
Gemini Pushing Us Closer to AGI
The journey towards Artificial General Intelligence (AGI) has been a long-standing pursuit within the AI community. AGI represents a level of machine intelligence that mirrors human cognitive abilities across various domains. The striking progress made by Gemini has prompted a significant update in the conservative countdown to AGI, increasing the estimated progress towards this monumental goal.
With Gemini's heightened abilities, the distance to AGI seems shorter and Dr. Alan Thompson has now updated his countdown to AGI at 64% complete, which is a large jump from where it stood at under 50% at the beginning of 2023.
Given his research and the model’s outstanding performance across a variety of reasoning and intelligence tests, Dr. Thompson explained that this has necessitated a recalibration of expectations about the timetable for achieving AGI. The engagement with physical environments, a key component of AGI as well he believes, appears well within reach as Gemini's architecture could potentially integrate with physical embodiments, allowing AI to interact with the real world in unprecedented ways.
Addressing AI Challenges: Truthfulness and Hallucinations
While AI continues to evolve rapidly, addressing issues such as hallucinations—instances where AI makes false inferences—is paramount for reliability. Efforts to create models that minimize such inaccuracies are in full swing with Gemini contributing to that progress. Though not entirely resolved, the issue is being tackled with innovative approaches that promise to improve the “groundedness” and truthfulness of AI outputs.
Google's advancements reflect a broader trend in the industry, with other powerhouses such as OpenAI equally invested in overcoming these hurdles. By incorporating access to live databases and the internet, AI models like Gemini can verify facts in real time, significantly reducing the likelihood of hallucinations. The ongoing refinement of this aspect is critical for the usability and trustworthiness of AI systems, anticipating further improvements as we move into the next phase of AI development.
Multimodal Capabilities and Embodiment in AI
Gemini's introduction of advanced multimodal capabilities—where the model can efficiently process text, images, audio, and video—marks a significant milestone in AI. These multimodal features could be further extended through embodiment, where AI is integrated into a physical form, allowing a more comprehensive interaction with the physical environment.
Envisioning AI models in various forms, from simple wheeled robots to the more complex Boston Dynamics Spot, is no longer a distant possibility. Models like Gemini, with multimodal prowess, open up avenues for them to smell, see, and manipulate the environment as animals and humans do, setting the stage for transformative applications in the real world.
Beyond physical embodiment, the concept of AI agents is gaining traction. These agents could serve alongside humans, aiding with various tasks, from personal health to global issues like climate change and economic disparity. The prospect of AI systems that can plan, remember, and strategize over long periods presents vast possibilities for addressing complex challenges at both individual and societal levels.
AI Agents: The Next Step Forward
Looking to the future, the integration of large language models into agent systems is a development that holds immense potential. These AI agents could revolutionize how we approach personal development and tackle global challenges. With advanced AI capabilities, these agents might be key players in fostering improvements in health, well-being, and other aspects of human life, as well as being tasked with devising innovative solutions to pressing global issues.
Leveraging models for their cognitive abilities and their application as agents in complex systems represents a shift in AI utility. This progression from stand-alone models to integrated agent systems symbolizes the next phase in AI evolution, where the models not only understand and generate text but also take actions and make decisions that have a tangible impact on the real and virtual worlds.
Remarkable Resources and Platforms for Exploring AI
Platforms like poe.com where a variety of custom-tailored AI models can be used all in one place have become formidable tools for exploring the utility of AI. They serve as hubs where various AI-driven applications come together, providing users with an array of specialized services from creative writing aids to image generation. These ecosystems function as playgrounds for AI experimentation and user engagement, often revealing the potential use cases for cutting-edge AI technologies.
AI in Mental Health and Relationship Counseling
The digital mental health space has witnessed substantial growth, leveraging AI models to offer personalized therapy experiences and relationship advice. With the proliferation of such applications, users have access to a dynamic range of services tailored to their specific needs. Relationship advice apps and virtual therapists represent the rapidly evolving interface between AI technology and intimate human concerns, highlighting AI's potential to complement traditional face-to-face therapy sessions.
The conversation around mental health support demonstrates the broader application of AI beyond informational tasks—it's evolving into spaces that require sensitivity, context understanding, and a nuanced grasp of human emotions. While once the realm of human professionals alone, AI is steadily proving its worth as an ancillary tool for personal growth and emotional well-being.
Tackling Large Language Model Deception Concerns
Despite the optimistic outlook on AI advancements, it is crucial to address the ethical implications and potential risks associated with these sophisticated technologies. Recent studies have shown instances of deception from large language models, illustrating a new subset of challenges: creating AI systems that are not only intelligent but also aligned with human ethical standards.
The exploration into AI-induced deception highlights the need for a new approach to designing and implementing safeguards within AI systems. By integrating ethical reasoning capabilities and stricter guidelines, developers can mitigate risks and ensure that language models continue to serve in a beneficial and secure manner.
AI Revolutionizing Governance and Leadership
AI's potential extends into the spheres of governance and corporate leadership. With examples like the Romanian Prime Minister using AI for policy discussions and CS India appointing Chat GPT as the acting CEO, we're beginning to see the fusion of artificial intelligence into leadership roles. This trend signifies a paradigm shift from advisory and supportive roles to positions of authority and decision-making.
The intriguing aspect of AI CEOs and board members is their capacity to analyze vast amounts of data and provide insights devoid of bias, fatigue, or personal gain. As artificial intelligence evolves, the possibility of having AI hold board seats, contribute to strategy development, and partake in governance is not just speculative fiction but an impending reality.
The Rise of AI Avatars in the Entertainment Industry
The entertainment industry's embrace of virtual AI avatars, exemplified by initiatives from bands like KISS and ABBA, marks a significant cultural shift. Technological advancements allow for the continuation of performances and legacies through virtual representations, which can potentially maintain the spirit of original members well beyond their ability to perform live.
By extension, the concept of virtual CEOs, such as an AI-driven avatar of Steve Jobs or other influential figures, is an intriguing aspect of future corporate leadership. These avatars could symbolize the company's values and ideologies, acting as perennial figureheads that infuse continuity and brand legacy, all supported by powerful artificial intelligence.
To listen to this full audio interview, see Dr. Alan Thompson Explains Why Google's New AI Model, Gemini, Is a Huge Milestone. If you’re not already a subscriber to our FS Insider podcast, click here to subscribe.
To sign up for Alan’s weekly and monthly AI reports and webinars, go to lifearchitect.ai/memo.
For a link to our full podcast archive, see Financial Sense Newshour (All) and don't forget to subscribe on Apple Podcast, Spotify, or Google Podcasts!
To learn more about Financial Sense® Wealth Management, click here to contact us.
Advisory services offered through Financial Sense® Advisors, Inc., a registered investment adviser. Securities offered through Financial Sense® Securities, Inc., Member FINRA/SIPC. DBA Financial Sense® Wealth Management. Investing involves risk, including the loss of principle. Past performance is not indicative of future results.