Siri Gets an MBA: Autonomous Agents and The 1-Person Unicorn
The Power of AI: Questioning Assumptions in Startup Dynamics
Essay Outline and Signpost
I. Introduction
Background: Institutional investment in Silicon Valley
Concept of "founder-product fit"
The resource gap and the autonomous agent solution
II. Autonomous Agents and the Solo Innovator
Introduction to autonomous agents
Potential for solo innovators: The 1-Person Unicorn hypothesis
Challenging conventional wisdom: Large organizations vs. solo genius
III. Historical Perspective: Cloud Computing's Transformative Impact
Comparison with cloud computing
Effects on capital allocation and startup ecosystem
IV. Understanding Autonomous Agents: A Deep Dive
A. Building Blocks
Definitions and real-world examples
Components: LLMs, Tools, Memory
B. Research and Advancements
Recent studies and human-like behavior
Strides toward AI with a theory of mind
V. Bridging Human and Machine: The Mechanism of Autonomous Agents
A. The Iterative Process
Objective determination to task execution
B. Challenges and Obstacles
Reasoning, security, deployment, and more
VI. The Future of Innovation: Implications for Founders
A. Transforming Startup Dynamics
Automating tasks: Coding, customer service, design
Future agents: System Architect, Front-End Engineer, Product Manager
B. The Emergence of the One-Person Unicorn
Comparison with cloud computing
The potential for solo innovators
VII. Conclusion
Summarizing the paradigm shift
Opportunity to reshape the future of innovation
Call to action: Embrace transformative technologies
As institutional pre-seed investors, we think about investing differently than most others in Silicon Valley.
We work with startups at their earliest stage, startups that often lack sales and, in most cases, even a prototype to evaluate. Since there is no operational history or metrics to diligence, our focus turns to the founders' capabilities and the validity of their product hypothesis. We assess the alignment between a founder's background and their product's vision, a concept we call "founder-product fit": the ideal match between a founder's expertise and the distinctive value their product promises to deliver.
Our "scary-early" investment criteria bring us face-to-face with highly ambitious founders who have grand ideas but lack the resources to pursue them without outside capital. It is in this challenge, the gap between vision and the resources to execute, that autonomous agents could offer a capital-efficient, cutting-edge solution.
For those new to the concept, an autonomous agent can be likened to a sophisticated software guide capable of independently navigating an environment, making informed decisions, and taking actions aligned with predefined goals. Its distinguishing feature, separating it from more familiar software programs like Excel or Slack, is its ability to operate with a significant level of autonomy. In other words, it’s not dependent on continuous human input or direction to function effectively.
Harnessing the power of these intelligent entities could help ambitious founders bridge the resource gap, potentially turning traditional expectations about the resources needed for startup success on their head.
This leads us to an exciting, potentially disruptive hypothesis: Could you, armed with a clear vision and deep technological knowledge, leverage autonomous agents to become a creator of the next 1-Person Unicorn?
This notion presents a challenge to many of the principles I upheld when embarking on my investment career, principles largely informed by a cornerstone guide for Silicon Valley startups, Peter Thiel's “Zero to One.” In this book, Thiel posits that from the Founding Fathers in politics to the Traitorous Eight at Fairchild Semiconductor in business, small groups of people with a remarkable sense of mission can bend reality to their will.
This recalls George Bernard Shaw’s famous quote:
The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
Thiel's perspective suggests that large organizations, bogged down by bureaucracy, often stifle innovation and swift execution. On the other hand, he observes that a single genius, while capable of unique work, can't bring about wide-ranging disruption alone. His solution to this paradox lies in startups. Here, small teams balance the capacity for collaboration with the ability to take decisive action, a culture that manifests itself as Silicon Valley’s motto, “Move fast and break things.” In other words, a small, driven team can create a far more significant impact than any one person could achieve in isolation.
Might we be underestimating the potential of a solo innovator empowered by advanced technology to trigger transformative disruption across industries?
At present, skepticism about the solo innovator is anchored in the limitations of technology. Yet, as a professional optimist, I predict a future in which an individual equipped with deep technological knowledge, empathy-driven product insights, and dogged resourcefulness could employ generative AI and autonomous agents to build a 1-Person Unicorn.
To lay the groundwork for this potential future, let's reflect on a pivotal moment in recent tech history that has fueled my confidence: the introduction of cloud computing.
Parallels in Capital Allocation Improvements: Cloud Computing's Impact on Startup Constraints
My conviction stems from a (near) historical parallel: the introduction of cloud computing and the resultant proliferation of Software-as-a-Service startups in the mid-2000s.
Before Amazon Web Services’ introduction in 2006, founders required venture funding to pay for the high-cost infrastructure and physical systems that enabled their software. In this case, infrastructure was on-premise (on-prem) computing installed at physical company offices. This norm was expensive to procure and maintain, limiting the speed and flexibility for a startup to offer its services. Hardware behemoths like Cisco, EMC, and Juniper Networks became indispensable to Silicon Valley, pairing every company that created innovative software with the hardware needed to operate it.
So, you couldn't "move fast and break things" when on-prem computing required high upfront costs and painstaking maintenance. Cautious intentionality was the modus operandi. On-prem systems made it tougher for startups to adapt to technological improvements or demand fluctuations.
The introduction of cloud computing was like replacing home libraries with nimble Kindles. AWS and similar services allowed startups to easily provision infrastructure on demand. Now they could nimbly spin up or terminate servers based on needs, paying only for what they used.
This flexibility triggered positive externalities across the startup ecosystem, lowering barriers to entry. The second-order effect of lower costs was a surge of new startups. After 2006, VCs shrank average deal sizes by 20%, yet total investment held constant. This meant VCs could back more startups with less capital concentration, a product of the new cost efficiencies.
The chart below from the Journal of Financial Economics illustrates the transformative impact of decreased computing costs following the advent of cloud computing. It fostered a climate conducive to innovation and experimentation, as suggested by the rising number of startups in the treated (cloud computing) group versus the non-treated (on-prem) group. Though later-stage funding levels remained stable, the relative ease and reduced cost of kick-starting a venture signaled a paradigm shift in the financial dynamics of the startup ecosystem.
Exploring the influence of cloud computing has led me to believe we're on the cusp of a similar tech revolution. Autonomous agents and generative AI could unlock sequential enhancements in productivity, experimentation, and economic value, specifically benefiting founders who can best utilize these tools.
Part I: Understanding Autonomous Agents: The Interplay of Large Language Model Reasoning Engines, Tools, and Memory
This essay is segmented into two parts for clarity. Part I is a foundational overview of the building blocks behind autonomous agents, building upon our previous essay: “An Approachable Primer on the Layers of AI and LLMs.” This section serves to address the "what" behind autonomous agents. Subsequently, Part II explores the "how" - the mechanisms that render autonomous agents feasible and the potential implications of their continued development.
The Building Blocks of Autonomous Agents:
Autonomous agents are self-directed software programs.
Think of them as Siri on steroids: able to act without human input and, sometimes, in a very meaningful way.
To illustrate, let’s consider the capabilities of two autonomous agents, "Excel Siri" and "Roadmap Siri." "Excel Siri," an agent specializing in managing financial models in Excel, adeptly aggregates and formats data from internal and external datasets, including FactSet or CapIQ. It can construct LBO or DCF models for specific companies while complying with a bank's unique branding guidelines. On the other hand, "Roadmap Siri" is programmed to craft actionable product roadmaps for startups, considering the company’s pitch deck, funding balances, accessible resources, and team abilities. Once their initial objectives are set, both "Excel Siri" and "Roadmap Siri" operate autonomously, completing their designated tasks with minimal human intervention.
Large Language Models (LLMs) play a central role in achieving these objectives. They orchestrate various tools and manage their outputs, serving as an indispensable reasoning engine.
A fully functional autonomous agent integrates several components. Besides LLMs, these include specialized tools or "agents" performing specific tasks and an extensive memory system serving as an information repository.
How do these pieces fit together? To answer this, we will first look at each element in isolation. Then, we will explore how they interact to create a sophisticated autonomous entity.
LLMs: The Makeshift Reasoning Engines
Tools: Narrow-Ability Agents
Memory: The Information Repository
LLMs: The Makeshift Reasoning Engines
Large language models (LLMs) are a type of artificial intelligence system trained on massive amounts of text data. They can understand and generate human-like text, and, from this functionality, power many of the newest AI applications, including chatbots, translation tools, and now, autonomous agents.
LLMs act as approximate reasoning engines for autonomous agents. They can decipher patterns in natural language, abstract tasks, and "reason" over a range of incoming and self-generated data. In turn, these “reasoned” insights drive subsequent tasks for agents as they serve to accomplish an objective, replacing a human’s need to evaluate each step.
Outside of a few professions, most people have never used an LLM to extract "insights" from unstructured data. To better illustrate what I mean, I'll pull from an example I learned while attending the Snowflake Summit in Las Vegas.
For context, Snowflake delivers a cloud-based solution that serves as a comprehensive data warehouse for businesses. Companies utilize this platform for the storage, processing, and analysis of substantial datasets. Predominantly, Snowflake's clientele includes sizable organizations such as AT&T and Fidelity, which manage petabytes of data.
Using AT&T as an example, a data scientist could be tasked with churn analysis for a subset of users. This requires analyzing intricately linked data like:
Service usage patterns across voice, video, and data (requires aggregating usage logs from multiple systems)
Customer service interactions (spanning systems for call centers, online chat, social media, email, etc.)
Account characteristics like customer tenure, income, geography, education, content preferences (from disconnected systems like billing, marketing, sales)
Deciphering the multifaceted relationships embedded within these data reservoirs demands intricate SQL queries. The task's complexity is not limited to the advanced coding required to extract the data; it also lies in making sense of the data, a considerable challenge for a data scientist given the petabytes that require analysis.
In this context, an LLM plays a pivotal role in transforming the labyrinth of SQL data into actionable insights. It not only navigates the complexities but also crafts a cohesive narrative behind the numbers. Recognizing nuanced patterns, such as a correlation between high complaint counts and brief customer tenures, an LLM can forecast future customer churn, thereby enhancing a data scientist's judgment. Tasks that would traditionally take a week's worth of data dissection and analysis can now be accomplished in merely minutes or hours.
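To make the pattern concrete, here is a minimal sketch of how an LLM can be used as a "makeshift reasoning engine" that turns a plain-English churn question into SQL. The schema, table names, and `ask_llm` stub are hypothetical illustrations for this essay, not a real AT&T dataset or Snowflake API.

```python
# Hypothetical warehouse schema mirroring the churn-analysis data
# described above (usage logs, support interactions, account traits).
SCHEMA = """
usage_logs(customer_id, channel, minutes, bytes, logged_at)
support_tickets(customer_id, source, sentiment, opened_at)
accounts(customer_id, tenure_months, region, plan)
"""

def build_churn_prompt(question: str) -> str:
    """Assemble the context an LLM needs to write the SQL for us."""
    return (
        "You are a data analyst. Given this warehouse schema:\n"
        f"{SCHEMA}\n"
        f"Write a SQL query that answers: {question}\n"
        "Return only the SQL."
    )

def ask_llm(prompt: str) -> str:
    """Stub for a real LLM call; wire this to your provider of choice."""
    raise NotImplementedError

prompt = build_churn_prompt(
    "Which customers with tenure under 6 months filed 3+ negative tickets last quarter?"
)
print(prompt)
```

The point is the division of labor: the human supplies intent in natural language, and the LLM handles the intricate query construction that would otherwise consume a data scientist's week.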
Although this is just one example, it underscores the immense leverage an individual can gain from an LLM when confronted with massive amounts of data. It's critical to remember, however, that in this scenario, LLMs act as makeshift reasoning engines. They don't genuinely "reason" or possess a theory of mind, which is to say, they fall short in some basic areas, such as maintaining focus on a topic, avoiding fabricated information, and comprehending fundamental mathematical concepts.
Nonetheless, mounting evidence suggests we are making strides toward the creation of an AI with a theory of mind.
As an illustration, Stanford University and Google conducted a joint study that demonstrated the potential of LLMs to simulate convincingly human-like behavior using generative agents. These autonomous software entities can craft responses or actions that closely mimic human behavior.
In the experiment, researchers gave agents a simple task: plan a Valentine's Day party. These agents used natural language to talk, recalled their experiences, and mapped out their plans.
The experiment’s results were nothing short of remarkable.
Agents extended party invitations to other agents, fostered new friendships, asked one another out on dates, and coordinated their attendance at the event. The intricate social exchanges between the agents highlighted the LLMs' capacity to reason across diverse social situations, a crucial attribute of typical human behavior. Notably, the study even reported unplanned complex social behaviors, where agents formed alliances or rivalries based on their interactions.
This research illuminates the extraordinary adaptability of autonomous agents and their emerging capacity to learn in a manner strikingly similar to human cognition. These advancements are propelling us ever closer to a point where AI can mirror our ability to comprehend and reason.
Tools: Narrow-Ability Agents
In agentic systems, agents - akin to specialized software programs - possess narrow, focused capabilities. Imagine them as advanced versions of temperature control mechanisms or emergency brake systems in vehicles. Each agent is custom-tailored for specific tasks, functioning autonomously, adjusting to its environment, and evolving with unfolding situations to achieve predefined objectives. These agents generate the data that LLMs use to derive pertinent insights.
The value of this data becomes particularly apparent when observing how specialized agents work across diverse sectors. These tools, each generating its unique dataset, have revolutionized their respective fields:
Automated Code Generation: GitHub's Copilot suggests lines and blocks of code, reducing coding time and allowing programmers to focus on problem-solving.
Legal Document Review: LawGeex flags inconsistencies and potential contract liabilities, decreasing attorney document review time.
Pharmaceutical Discovery: Insilico Medicine's AI rapidly produces molecular templates to accelerate drug discovery and development.
These generative agents exemplify how various facets of Artificial Narrow Intelligence (ANI) can be integrated to perform specific tasks. Each agent excels at a specific, narrowly defined task, and when combined, form a more sophisticated, adaptive system.
This combination of specialized ANI models results in emergent behavior that mirrors human adaptability and understanding, allowing these agentic systems to perform tasks with enhanced efficiency and sophistication.
Memory: The Information Repository
Memory is the bedrock of self-learning – a fundamental element for agent improvement. It imparts a sense of continuity for agents, a chronological context within which they refine their abilities. By recalling past interactions, they draw from their experiences, honing their problem-solving acumen and utility to the agentic system as a whole.
Within their arsenal, autonomous agents employ two types of memory:
Episodic memory allows an agent to record its experiences and revisit them to establish improved strategies and outcomes, akin to human long-term memory.
Working memory functions similarly to human short-term memory. It assists agents in handling information in real-time during task execution, thus refining problem-solving and decision-making within the current task.
To explain more clearly, consider an agent's "goal" as a season of a TV show. Episodic memory allows a character to remember events from past episodes to progress the storyline, while working memory helps the character make immediate decisions within a single episode to achieve their goal.
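The two memory types can be sketched in a few lines of code. This is a toy illustration of the distinction, not the API of any real agent framework; the class names and exact-match recall are assumptions for clarity (a real system would retrieve by embedding similarity).

```python
class EpisodicMemory:
    """Long-lived record of past task outcomes the agent can revisit."""
    def __init__(self):
        self.episodes = []

    def record(self, task, outcome):
        self.episodes.append({"task": task, "outcome": outcome})

    def recall(self, task):
        # Exact match for illustration; real systems use vector similarity.
        return [e for e in self.episodes if e["task"] == task]

class WorkingMemory:
    """Short-lived scratchpad, cleared between tasks."""
    def __init__(self):
        self.scratch = {}

    def note(self, key, value):
        self.scratch[key] = value

    def clear(self):
        self.scratch = {}

episodic = EpisodicMemory()
working = WorkingMemory()

# During one "episode" the agent jots notes and records the outcome.
working.note("current_step", "draft roadmap")
episodic.record("plan Q3 roadmap", "shipped on time")

# A new task begins: the scratchpad resets, but past episodes survive.
working.clear()
print(episodic.recall("plan Q3 roadmap"))
```

The asymmetry is the point: working memory lives and dies with a single task, while episodic memory accumulates across tasks and is what makes improvement over time possible.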
Technically speaking, an autonomous agent's memory resides within a vector database. This database operates like an advanced filing system, though instead of storing data as conventional files, it organizes them as intricate mathematical constructs—vectors. These vectors encapsulate the essential characteristics of raw data such as text or images, translating them into numerical values readable for computers. This process of converting raw data into numerical representations is called embedding.
Vector databases measure the similarity between vector representations of queries and data. This eliminates the need for exact matches or predefined, rigid search criteria, saving both time and compute.
The closer the vectors, the higher their similarity. For example, 'dog' and 'puppy' would be closely positioned in a vector database, indicating their close relationship. On the other hand, 'dog' and 'van' would be set further apart, reflecting their relative, low similarity.
[Figure: vector embedding similarity diagram. Source: Databricks]
In the context of autonomous agents, vector databases serve as a form of "long-term memory.” They store past interactions and learnings as vectors and then inform current actions based on the similarity to this stored data, enabling the self-learning component of autonomous agents.
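The dog/puppy/van intuition above can be demonstrated with a dependency-free similarity search. The 3-dimensional "embeddings" here are made up for illustration; real embeddings are learned and have hundreds or thousands of dimensions, but the ranking mechanism (cosine similarity) is the same.

```python
import math

# Toy "vector database": words mapped to hand-picked illustrative vectors.
store = {
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.75, 0.15],
    "van":   [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, k=2):
    """Return the k stored items most similar to the query vector."""
    ranked = sorted(store, key=lambda w: cosine(store[w], query_vec), reverse=True)
    return ranked[:k]

# A query vector close to "dog" surfaces "dog" and "puppy", not "van",
# without any exact match or predefined search criteria.
print(nearest([0.88, 0.78, 0.12]))  # → ['dog', 'puppy']
```

This is the retrieval step behind an agent's "long-term memory": store past interactions as vectors, then rank stored experience by similarity to the current situation.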
Part I Conclusion
The synergy between large language models, tools, and memory forms the foundation for autonomous agents, enabling them to learn, adapt, and perform tasks with remarkable independence. This rapid progress is fundamentally altering our perception of artificial intelligence and paving the way for autonomous agents to become an integral part of our daily lives.
Part II: Bridging Human Intentionality and Computational Efficiency: LLMs as a Key Driver of Autonomous Agents
Autonomous agents symbolize the potential for blending unique characteristics that distinguish humans from computers. Humans demonstrate innate intentionality and habitual behavior, excelling in strategizing and decision-making. On the other hand, computers, though devoid of this intent-driven cognition, remain instrumental aids in our everyday tasks, surpassing humans in performing complex operations like data extraction, processing, and transformation.
This blending of attributes enables autonomous agents to catalyze the transformation of computers into supplementary cognitive systems or “second brains.” In much the same way humans provide high-level directives to computers, agentic systems could likewise steer other agents toward accomplishing multi-tiered objectives, all while deriving direction from human intention.
Having explored the key components of autonomous agents, Part II now turns to the 'how' - how agents leverage LLMs and human guidance to operate and accomplish complex objectives efficiently.
The Iterative Process of Autonomous Agents
How do autonomous agents, incorporating LLMs, tools, and memory, function in tandem?
Let's walk through a demonstration of these components in action:
Objective Determination: Upon receiving a user's request, the autonomous agent platform employs a Large Language Model (LLM), such as GPT-4, to interpret and understand the user's intent.
Task Breakdown: The primary objective is segmented into manageable sub-tasks, each represented by distinct prompts. Users can review, edit, or augment these prompts using platforms like AutoGPT and BabyAGI. This structured task list is archived in a vector database, serving as episodic memory.
Task Prioritization: The LLM, functioning as a reasoning engine, allocates priority levels to each sub-task—both initially and throughout the process—as new information comes to light.
Agent Selection: Specialized agents, possessing the requisite skills ranging from web search to coding, are assigned the sub-tasks that demand specific capabilities. For tasks that transcend traditional natural language processing, these agents may employ tools like HuggingFace Transformers to access pre-trained AI models.
Execution: Armed with situational awareness via working memory, the specialized agents carry out the assigned sub-tasks.
Evaluation: Following the task completion, an "Evaluation Agent" reviews the outcomes and assesses previously accomplished tasks, leveraging episodic memory to guide its appraisal.
Iteration and Recursion: Steps 2 through 6 are reiterated until all sub-tasks have been successfully executed. Diverse agents serve varied roles, from memory retrieval to task evaluation, while others interface with external environments for data acquisition or additional tasks.
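The seven steps above can be compressed into a skeletal control loop. Every function body here is a stand-in for an LLM call or tool invocation; none of it reflects the internals of a specific framework like AutoGPT or BabyAGI.

```python
def break_down(objective):
    """Step 2: an LLM would split the objective into sub-tasks (stubbed)."""
    return [f"{objective}: research", f"{objective}: draft", f"{objective}: review"]

def prioritize(tasks):
    """Step 3: an LLM would reorder by priority; here we keep the given order."""
    return list(tasks)

def select_agent(task):
    """Step 4: route each sub-task to a specialized agent (stubbed)."""
    return "search-agent" if "research" in task else "writer-agent"

def execute(agent, task):
    """Step 5: the chosen agent performs the task (stubbed)."""
    return f"{agent} completed '{task}'"

def evaluate(result):
    """Step 6: an evaluation agent judges the outcome (always passes here)."""
    return True

def run(objective):
    episodic_log = []  # archived results, playing the role of episodic memory
    queue = prioritize(break_down(objective))
    while queue:  # step 7: iterate until all sub-tasks are done
        task = queue.pop(0)
        result = execute(select_agent(task), task)
        if evaluate(result):
            episodic_log.append(result)
        else:
            queue.append(task)  # re-queue failed tasks for another pass
    return episodic_log

print(run("write launch plan"))
```

Even this toy version makes the architecture visible: the LLM shows up three times (breakdown, prioritization, evaluation), while narrow-ability agents do the actual work in between.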
The Challenges of Autonomous Agents
Although autonomous agents hold potential, they also encounter obstacles that may render my optimistic vision less “rose-tinted.”
One major challenge lies in the reasoning capabilities of autonomous agents, particularly their struggles with interpreting context and recalling past interactions. This deficiency can have serious implications, especially in high-stakes industries such as supply chain management or healthcare, where precision, context awareness, and timing are critical. Even minor errors can trigger severe consequences, underscoring the need for improving agents' contextual reasoning and memory for successful real-world application.
Current deployment methods, which typically follow a use-and-discard approach, also hinder the performance enhancement of autonomous agents. This model fails to harness learning opportunities from interactions and errors. A promising solution could involve leveraging decentralized Web3 incentives to foster collective knowledge consolidation, which would ensure that learnings are distributed across the developer communities instead of being discarded. Awareness deficiencies also impede agent effectiveness, with improved computational, data, inter-agent, and user awareness promising to boost resource usage, decision-making, system efficiency, and user interaction quality.
The high computational resource demands of autonomous agents, particularly relevant for advanced systems like GPT-4, pose significant challenges for their widespread adoption and scalability. Solutions may lie in algorithm optimization, cloud leveraging, and exploring new iterative models. As autonomous agents' capabilities expand, potential attack vectors also increase, necessitating robust cybersecurity measures to preserve system integrity. Security threats have already materialized in hackers targeting self-driving cars, demonstrating the urgent need for proactive, security-first approaches.
Finally, there's a dichotomy between the theoretical promise and real-world usability of autonomous agents, manifested in Twitter enthusiasts offering cash rewards for practical demonstrations of agentic systems. This highlights the ongoing gap to fully realize autonomous systems' potential.
Embracing the Future: The Rise of Solo Innovators
As we overcome these challenges and continue to advance autonomous agents, we're not only enhancing the capabilities of these agents, but also facilitating a significant shift in the innovation landscape - the rise of solo innovators. These individuals, equipped with increasingly sophisticated autonomous agents, are poised to disrupt traditional business models and reshape industries.
We’ve covered much thus far, from industry parallels to building blocks, execution mechanisms, and challenges. Looking ahead, the essay explores how advancements in autonomous agents over the next decade could increase leverage for founders, redefining the potential for solo founders to disrupt industries.
Implication Examples and Use-Cases for Founders
The advent of autonomous agents has the potential to fundamentally reshape startup dynamics by eliminating traditional constraints for founders. Where entire engineering teams were once needed to build and launch a product, future agents may allow solo founders to achieve similar outputs with a fraction of the resources, opening the pathway for solo innovators to realize the One-Person Unicorn.
As I referenced in my essay on the primers of AI, tools like GitHub Copilot are already automating narrow coding tasks in a way that has made them indispensable to software engineering workflows. Those productivity gains extend beyond code generation, from customer service to design. Microsoft's Power Virtual Agents provides an out-of-the-box generative AI chatbot capability to help with customer support and sales, augmenting the customer experience without a corresponding increase in human involvement. Similarly, Canva's Magic Design provides an AI-driven tool that creates graphics and presentations with a few prompts and clicks.
Although we are still some distance from fully realizing this vision, I’d like to offer a glimpse of what the future could hold, with examples of how agentic systems could come together to serve as “execution copilots” for founders.
Consider a typical scenario for a founder who has recently secured early-stage funding: their first hires often include a systems architect-engineer to design and maintain the overarching software architecture, a front-end engineer to develop user-friendly interfaces, and a product manager to align the team with the product strategy and roadmap. If we envision a future with a capable agentic system composed of these autonomous agents, we could see a transformation in the traditional "input" costs for startups, upending our preconceived notions of what resources are necessary for innovation:
System Architect Agent:
This agent specializes in automated system architecture, creating robust and scalable software structures effortlessly. It incorporates the best developer tools to foster innovation and efficiently manages cloud resources to promote cost-effectiveness. Through continuous usage analysis, it optimizes system performance and adapts over time. You can trust that it never compromises security and compliance, ensuring the integrity of your system.
Front-End Engineer Agent:
This agent is dedicated to crafting intuitive and engaging user interfaces for web development that perfectly align with your brand guidelines. It uses techniques like lazy loading for optimal performance. It efficiently translates design prototypes into modular, scalable code. With rigorous testing frameworks, it identifies and resolves any browser discrepancies, resulting in a seamless user experience across platforms.
Product Manager Agent:
This agent takes your product management to a new level. It conducts detailed market analysis to understand customer pain points and strengthens your product's positioning through competitor studies. It consolidates all findings into a strategic memo outlining key audience insights, value propositions, and unique features. Additionally, it constructs dynamic product roadmaps and plans, leveraging team input for inclusive planning. By facilitating cross-functional agent communication, it fosters team harmony. This agent also continuously analyzes product data to identify optimization opportunities, inform decisions, and maximize market fit. With its enhanced capabilities, it accelerates your product's journey from concept to market.
Much like cloud computing removed infrastructure barriers, increasingly capable autonomous agents could be an accelerant to cost-effective, resource-efficient experimentation, ideation, and execution for solo founders. Rather than hiring teams, these ambitious founders may someday have the ability to leverage a network of specialized agents, giving rise to the One-Person Unicorn.
Conclusion
This essay aimed to challenge our traditional expectations of what it takes to achieve innovation. It explored how autonomous agents, acting as force multipliers in productivity, could empower not just small teams, but even single innovators.
The cloud computing revolution in the early-to-mid 2000s fundamentally transformed the way startups deploy software. We now stand at the precipice of a comparable paradigm shift with the advent of agentic systems.
Pioneering reasoning systems, such as GPT-4 and other sophisticated large language models, are bringing us closer to the potential of auxiliary operating systems that can augment our judgment and productivity. While current agentic systems might be more manual and frustrating, advancements in verifiable synthesis strategy, task scheduling, and path planning provide promise.
The rapid emergence of large language models and increasingly capable agentic systems presents us with a rare opportunity. We have the chance not just to reshape our assumptions but to actively mold the future we envision.
The question now is: will you merely observe as these transformative technologies evolve? Or will you, like the unyielding individuals who have redirected the course of history, harness these advancements to craft a more innovative, productive, and empowering future?
The decision is ours to make. The potential dormant in lines of code awaits our instruction. It's time to challenge our assumptions about the limits of solo innovators. The door to unrivaled human leverage has been opened - are you ready to step through?
Acknowledgments:
I wish to express my deepest appreciation to everyone who has contributed their expertise and time to the editing and enhancement of this essay. Your invaluable input and insightful contributions have been fundamental in shaping the final version of this work.
I would like to extend my specific gratitude to:
Content: I am immensely grateful to Adam Kaufman (Up2 Fund), Zoe Enright (ClearView Healthcare Partners), and Camden McRae (Nextwave X Partners).
Subject-Matter: I extend my sincere appreciation to Brandon Cui (Mosaic ML), Fan-Yun Sun (Stanford Ph.D, Computer Science), whose expertise on Generative AI and LLMs has significantly enriched the depth and accuracy of this essay.
Proofreading: A special note of thanks to Samuel Wheeler (Redacted) for his meticulous attention to detail and commitment to linguistic precision.
Sources:
https://www.sciencedirect.com/science/article/abs/pii/S0304405X18300631
https://hbswk.hbs.edu/item/amazon-web-services-changed-the-way-vcs-fund-startups
https://vitalflux.com/large-language-models-concepts-examples/
https://filice.beehiiv.com/p/how-ai-has-turned-my-vc-work-into-flow-part-ii
https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414
https://link.springer.com/article/10.1007/s11023-022-09602-0
https://towardsdatascience.com/quick-ml-concepts-tensors-eb1330d7760f
https://www.simplilearn.com/what-is-intelligent-agent-in-ai-types-function-article
https://www.sciencedirect.com/topics/computer-science/autonomous-agent
https://www.mattprd.com/p/the-complete-beginners-guide-to-autonomous-agents
https://twitter.com/RunGreatClasses/status/1655957826873835521
https://link.springer.com/article/10.1007/s10009-022-00657-z
https://chaosengineering.substack.com/p/decentralized-artificial-intelligence
https://link.springer.com/article/10.1007/s42979-022-01043-x
https://labelbox.com/blog/how-vector-similarity-search-works/
https://www.zdnet.com/article/nvidia-uses-gpt-4-to-make-ai-better-at-minecraft/
https://deepai.org/machine-learning-glossary-and-terms/weight-artificial-neural-network
https://www.researchgate.net/figure/Deep-learning-diagram_fig5_323784695
https://www.dominodatalab.com/blog/transformers-self-attention-to-the-rescue
https://link.springer.com/chapter/10.1007/978-3-030-58298-2_2