Here at Sunholo, we've specialised in deploying GenAI applications for the past few years. Recently, when talking to new prospects we have noticed a trend: they show us their own internal chatbot, built at great expense just 18 months ago, and ask why it already feels outdated compared to ChatGPT or Gemini. Is there a better way to stay on the cutting edge while still keeping your AI application bespoke? The answer takes us on a journey through web history, emerging protocols, and a future that's arriving faster than most realize.
MIT's recent study claimed that 95% of generative AI pilots fail to achieve rapid revenue acceleration.
[MIT report: 95% of generative AI pilots at companies are failing | Fortune](https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/)
But that tells only part of the story. What they don't say is why.
One reason could be that self-built AI applications need a unique user interface or unique data to be any better than the generic AI applications. If you have neither (and preferably you'd have both), then you are going head-to-head with hyperscalers that have far more resources than you. To be of value, your internal AI tools need custom integration with internal data sources and custom UIs, yet home-spun solutions quickly become outdated as the rapid pace of AI capabilities outpaces developer project time.
What's the answer? Standards. It is the experience from thousands of AI deployments across the world that has generated the community feedback necessary for standard protocols to emerge.
## The Protocols Emerge (MCP & A2A)
In November 2024, Anthropic released the [Model Context Protocol (MCP)](https://www.anthropic.com/news/model-context-protocol). It addressed the emerging need for an industry standard: everyone was building the same AI integrations, but in slightly different ways. MCP isn't particularly unique in its properties, much like HTTP wasn't. The value of a protocol comes only if and when it gains widespread adoption.
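To make the standard concrete: MCP is layered on JSON-RPC 2.0, so a client invoking a tool exposed by an MCP server sends a `tools/call` request shaped roughly like the sketch below. The tool name and arguments here are hypothetical, purely for illustration; the exact message schema is defined in the MCP specification.

```python
import json

# MCP messages are JSON-RPC 2.0. A client asks a server to run a tool
# via the "tools/call" method; the tool name and arguments below are
# hypothetical examples, not part of the spec itself.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_internal_docs",  # hypothetical tool exposed by a server
        "arguments": {"query": "Q3 sales figures"},
    },
}

# Serialise to the wire format the server would receive.
wire_message = json.dumps(request)
print(wire_message)
```

Because every client and server speaks this same shape, a tool written once can be plugged into any MCP-compatible AI application.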
Anthropic was the perfect source for this initiative. They're respected by developers for their model's coding capabilities but also neutral enough, being deployed across Google Cloud, AWS, and Azure, to be trusted by everyone.
Then in April 2025, Google announced the Agent2Agent (A2A) protocol, backed by 50+ tech companies.
[Announcing the Agent2Agent Protocol (A2A) - Google Developers Blog](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
While MCP connects AI to tools and data, A2A defines how AI agents collaborate. It's the difference between giving workers tools and teaching them to work as a team.
The A2A protocol, too, would only be of value if it did not lock users into Google, so it was deliberately kept vendor-neutral: Google transferred it to the open source [Linux Foundation project](https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/), and it is endorsed by competitor AI companies such as AWS, Salesforce, ServiceNow, Microsoft and IBM. Like HTTP before it, A2A's value only emerges through universal adoption.
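As a sketch of how A2A collaboration starts: each agent advertises itself with a JSON "Agent Card" describing who it is and what skills it offers, so other agents can discover and delegate to it. The field values below are hypothetical examples; the authoritative schema lives in the A2A specification.

```python
import json

# An A2A Agent Card is a JSON document an agent publishes so that other
# agents can discover its endpoint and skills. All values below are
# hypothetical, for illustration only.
agent_card = {
    "name": "InvoiceAgent",
    "description": "Answers questions about customer invoices",
    "url": "https://agents.example.com/invoice",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "lookup-invoice",
            "name": "Invoice lookup",
            "description": "Fetch an invoice by its reference number",
        }
    ],
}

# Serialise the card as it would be served to other agents.
print(json.dumps(agent_card, indent=2))
```

Discovery via a machine-readable card, rather than bespoke integration code per partner, is what lets agents from different vendors find and use each other.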
<ProtocolComparison
title="From HTTP to A2A: How Protocols Shape Technology Eras"