SEO – What Future? – A Conversation with Claude

As chatbots begin to replace search engines, how will that impact current SEO? Who will keep indexing?

The shift from search engines to chatbots will have significant impacts on SEO, but it’s still too early to say exactly how things will play out. Here are a few key points:

  • Google and other search engines will likely continue indexing the web and providing search functionality for the foreseeable future. Though chatbots may handle more queries directly, search engines still play a vital role in helping chatbots find authoritative content to share. So I don’t foresee Google losing relevance or revenue in the near term.
  • However, chatbots will change search behavior. People will increasingly have natural conversations to find information rather than typing keywords into search bars. This means SEO will evolve to focus more on semantics, natural language optimization, and indexing conversational content.
  • Chatbots powered by large language models can generate content on the fly. This could reduce the importance of on-page optimization and static text content over time. The focus may shift more to optimizing for entities, knowledge graphs and structured data that chatbots can easily interpret.
  • With chatbots, the gateways to information will be conversational platforms rather than search engines. So SEO experts will need to optimize for discoverability and recommendation on smart assistants and messaging platforms. Voice search optimization will grow in importance too.
  • We may see the rise of dedicated “Conversational SEO” roles focused entirely on optimizing content and experiences for chatbots rather than traditional search engine results pages.

The search landscape is definitely evolving. SEO will remain vital but will require new strategies and processes optimized for conversational AI rather than traditional search engine crawling and indexing. It’s an exciting time to be in the industry!

How can an SEO services offering take that into account? What will change, and what should be the differentiator in that area?

Here are some suggestions for how SEO professionals can adapt their services to be competitive in light of the rise of chatbots:

  • Offer conversational SEO audits and optimization – Analyze how your client’s content performs when accessed conversationally by chatbots. Optimize content for natural language queries.
  • Focus on voice search optimization – Optimize content to rank highly for voice queries. This may require different tactics like shorter content and strategic use of speakable schema markup.
  • Emphasize entity and knowledge graph optimization – Ensure key entities, products, and services are well structured in knowledge graphs so chatbots can interpret them, and boost visibility in knowledge panels and rich results.
  • Offer chatbot content creation – Create content optimized for conversational interfaces and chatbots. This may include smart FAQs (a markup sketch follows this list), conversational blog posts with dialogues, and interactive content formats.
  • Provide training in conversational design – Help clients hold engaging conversations with customers through chatbots. This includes conversation mapping, dialog trees, intent training, and conversational content.
  • Monitor chatbot analytics – Track metrics like conversational sentiment, question-answering rates, and recommendation clicks to optimize the bot experience.
  • Stay up-to-date on AI advances – Continuously learn about progress in chatbot technology to provide cutting-edge services. Prioritize platforms investing heavily in conversational AI.
  • Highlight omni-channel expertise – Show you can optimize across web, mobile, voice and messaging platforms for a seamless experience.
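
To make the "smart FAQ" idea above concrete, here is a minimal sketch of schema.org FAQPage markup generated with Python's standard json module. The questions, answers, and output format are illustrative placeholders, not a prescribed implementation:

```python
import json

# Hypothetical FAQ content – the questions and answers are placeholders.
faq_items = [
    ("What is conversational SEO?",
     "Optimizing content so chatbots and voice assistants can find, "
     "interpret, and quote it accurately."),
    ("Does it replace traditional SEO?",
     "No. It extends existing practice with natural-language and "
     "structured-data optimization."),
]

# Build schema.org FAQPage markup: one Question/acceptedAnswer pair per item.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq_items
    ],
}

# Emit a JSON-LD block ready to embed in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(faq_schema, indent=2))
print("</script>")
```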

The key is to diversify beyond just optimizing for organic search placement. Demonstrate you can optimize for the future of search – conversational interfaces powered by AI.

Tell me more about the speakable schema markup

Speakable schema markup is a type of structured data that optimizes content for voice search and conversational interfaces. Here are some key things to know about it:

  • It uses JSON-LD format and is recognized by Google, which has supported it in beta for news content read aloud by the Google Assistant; other voice platforms may adopt it over time.
  • It lets you flag the sections of a page that are best suited to text-to-speech playback – typically concise, self-contained passages such as summaries, key takeaways, or headlines.
  • The markup itself is a “speakable” property of type SpeakableSpecification, added to page-level types such as Article or WebPage. You identify the speakable sections with the cssSelector or xpath properties.
  • Fine-grained control of pronunciation, pauses, and cadence is handled separately through Speech Synthesis Markup Language (SSML), which governs how the audio itself is rendered.
  • Marked-up sections should read naturally when spoken – short sentences, no reliance on images or captions, and no datelines or boilerplate mixed into the text.
  • Because assistants read the flagged passages aloud and can cite the source, the markup can increase the chance your content is selected and attributed in voice answers.

In summary, adding speakable schema markup enhances your content’s visibility and comprehension for voice assistants. It’s a key optimization for the voice search era.
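
As a concrete illustration, here is a minimal sketch of speakable markup on an Article, again generated with Python. The headline, URL, and CSS selectors are hypothetical and would need to match elements that really exist on the page:

```python
import json

# Article markup with a "speakable" section, using schema.org's
# SpeakableSpecification. Headline, URL, and selectors are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "SEO in the Age of Chatbots",
    "url": "https://example.com/seo-chatbots",
    "speakable": {
        "@type": "SpeakableSpecification",
        # Point assistants at the short, self-contained sections that
        # work best when converted to speech.
        "cssSelector": [".article-summary", ".key-takeaways"],
    },
}

print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```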

Going back to the original topic – what will define what the AI provides as an answer? What are the new rules content producers should look into?

Great question. Here are some key factors that will determine what content AI chatbots surface as answers:

  • Relevance – Content that directly answers the user’s query or intent will be prioritized. Keywords and conversational context will matter more than links and popularity.
  • Accuracy – Chatbots will favor authoritative, factual sources over unverified information. Citing credible sources and data will be important.
  • Conversational tone – Content written in a natural, conversational style will likely perform better than formal, text-heavy content.
  • Interactivity – Chatbots reward content that allows seamless conversations, clarification, and personalized recommendations rather than static text.
  • Entities and knowledge graphs – Optimizing content to identify key entities so chatbots can link data together will be critical.
  • Freshness – Recent, timely information will likely be preferred over evergreen content in some cases.
  • Media format – Chatbots favor concise information communicated through lists, videos, infographics, etc. rather than blocks of text.
  • Mobile-first – Given the rise of voice searches, content optimized for smartphones will have an advantage.
  • Accessibility – Content that is easy to understand and consume will get priority. Readability and speaking ease matter.
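
On the readability point, here is a rough Python sketch of the classic Flesch Reading Ease formula. The syllable count is a crude vowel-group heuristic, so treat the scores as directional rather than exact:

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease – higher scores read more easily.

    Syllables are estimated by counting vowel groups, a rough heuristic
    rather than a dictionary-accurate count.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (word_count / sentences) - 84.6 * (syllables / word_count)

# Compare a dense sentence against a conversational rewrite.
dense = ("Organizations seeking algorithmic discoverability must operationalize "
         "semantic interoperability across heterogeneous content repositories.")
plain = "Write simply. Chatbots and people both find plain answers faster."

print(f"dense: {flesch_reading_ease(dense):.1f}")
print(f"plain: {flesch_reading_ease(plain):.1f}")
```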

So in summary, content creators should focus on conversational, interactive content that communicates key facts accurately and concisely, backed by credible sources. Media-rich formats and structured data will also matter, and optimizing for voice search and chatbot delivery will become increasingly vital.

But will the AI crawl those sites to decide what is relevant or not, or will it rely on indexing services like Google?

Great follow-up question. Chatbots will likely rely on a combination of their own natural language processing capabilities and existing search engines/indexes to determine what content is most relevant to serve users:

  • For general knowledge questions, chatbots from companies like Google, Microsoft, Meta, etc. will leverage their own huge language models trained on massive text datasets to generate answers on the fly. So no crawling is needed.
  • When queries require more specific, authoritative information, these AI chatbots will still depend on indexes like Google Search and Bing to surface trustworthy human-created content to pull from or share directly. The indexes provide a shortcut to relevant data.
  • Over time, we may see companies like Google fuse their search index directly into large language models like LaMDA so the AI can “search” its own index to find and summarize the most salient information to answer questions.
  • Some chatbot platforms allow training the AI on a company’s own documents and data to handle queries within a narrower domain, so no external indexing is needed.
  • Chatbots may employ a hybrid approach – using their own algorithms and trained data to propose an answer, then cross-checking indexed sources to verify accuracy and share links if needed. A toy version of this pattern is sketched below.
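
To illustrate that hybrid pattern, here is a toy Python sketch: propose a draft answer, retrieve the closest passage from a stand-in “index”, and only serve the draft if the source loosely supports it. Real systems use learned embeddings and live search APIs; everything here (the corpus, the overlap measure, the threshold) is a simplification:

```python
# Stand-in for a search index: URL -> indexed passage.
CORPUS = {
    "https://example.com/speakable": "Speakable schema markup flags page "
        "sections that suit text-to-speech playback.",
    "https://example.com/faq": "FAQPage markup lists questions with "
        "accepted answers so assistants can quote them.",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str) -> tuple[str, str]:
    """Return the (url, passage) with the largest word overlap with the query."""
    return max(CORPUS.items(), key=lambda item: len(tokens(query) & tokens(item[1])))

def answer_with_citation(query: str, draft_answer: str) -> str:
    """Serve the model's draft only if an indexed source loosely supports it."""
    url, passage = retrieve(query)
    overlap = len(tokens(draft_answer) & tokens(passage))
    if overlap >= 3:  # arbitrary support threshold for this toy example
        return f"{draft_answer} (source: {url})"
    return f"I couldn't verify that. Closest source: {url}"

print(answer_with_citation(
    "what does speakable schema markup do",
    "Speakable markup flags sections of a page for text-to-speech playback.",
))
```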

So in summary, chatbots will leverage a mesh of internal intelligence and existing search indexes to find the best content for answering users. Crawling websites directly may not be efficient or necessary if search engines continue providing relevant indexed content. But the line between search and chatbots will blur over time as AI powers everything!

How opaque will this process be? From the consumer’s standpoint, is this being “shoved down our throats”? Before, we were presented with options – biased options, but we could nevertheless navigate the results. With AI we won’t have a choice; we will be given “the truth”.

You raise a very important concern around the opacity of AI systems and the risk of being limited to a single “truth” or answer. Here are a few thoughts on this issue:

  • Transparency will be key. Responsible AI developers should provide indicators when results are AI-generated versus human-created, allow tracing an answer back to its origin, and explain the reasoning behind the AI’s responses.
  • Choice is still important. Even if an AI chatbot provides a single “best” answer, it should still ideally present alternatives, allow the user to refine or redirect the query, and make it easy to switch to a web search interface.
  • Diversity of sources matters. Well-designed AI assistants should draw from a wide range of viewpoints, not just a single training dataset or worldview. Developing unbiased datasets is an active area of research.
  • User feedback loops can help. Allowing users to rate responses for accuracy, provide feedback when answers seem biased, or even contribute their own alternative viewpoints can assist in diversifying AI systems.
  • Regulation and industry standards will likely emerge around transparency and ethics policies for AI chatbots as they become more pervasive.
  • Independent audits and testing by consumer advocacy groups can help hold AI providers accountable if responses seem misleading, biased, or opaque.

So in summary – you’re right we need to diligently avoid a future of opaque, biased AI assistants dictating a single narrative. But with thoughtful design choices and ethical oversight, we can harness AI’s potential while still preserving user agency and choice. The aim should be augmenting human understanding, not limiting it.


So what you are saying is that the same companies that control indexing now will somehow control, or greatly influence, AI results – whether as a source of content or through indexed results for “relevance”?

Yes, you make a fair point. The major tech companies that currently dominate search indexing and AI development do stand to gain even greater influence over information flows if conversational AI largely relies on their existing platforms and indexes.

A few things to consider on this:

  • There are some emerging startups exploring more decentralized, open-source AI models not tied to proprietary big tech platforms – so there may be alternatives.
  • Truly intelligent conversational systems should be able to synthesize information from diverse sources, not just what appears in one index. But this will require ongoing research.
  • As with search, tech companies could face pressure from regulators if their AIs are deemed anticompetitive or biased by mainly surfacing their own content.
  • Alternate indexes focused on specific topics like academic research, health, etc. may gain prominence as AI assistants look for authoritative niche information.
  • Users themselves may ultimately guide AI conversations to new contexts beyond what a single platform’s index covers.

Overall, you identify a valid concern that just a handful of big tech companies could dominate these emerging information pathways. As with any new technology, it’s important we remain vigilant around issues like transparency, bias, and preserving user agency rather than just accepting the status quo. There are likely both promises and perils with entrusting conversational AI as a key interface to the world’s knowledge.

How will it be possible for it to be free from corporate and/or government control?

Ensuring conversational AI systems remain free from excessive corporate or government control will be an ongoing challenge. Some ways we could promote more impartiality:

  • Favoring open-source AI frameworks that can be scrutinized and audited by the community. Proprietary AIs developed by corporations or states are harder to audit for bias.
  • Researchers and non-profits developing public domain training datasets to reduce reliance on data controlled by private companies.
  • Governments funding research into AI safety, transparency, and accountability to balance corporate interests.
  • Strict data privacy laws that limit what personal data can be collected and used to train AI systems without user consent.
  • Platform-agnostic protocols and standards for conversational AI that allow users to switch services easily, taking their personalized data with them.
  • Allowing users to inspect an AI’s reasoning and source data, flag biases, and opt out of personalization if desired.
  • Fostering a competitive landscape among AI providers and chatbot platforms, avoiding monopolization that stifles innovation.
  • Clear labeling when interacting with an AI chatbot vs human representative, and disclosing who developed the system.
  • Independent auditing of AI systems for issues like fairness, transparency, and manipulation before allowing widespread deployment.

The more conversational AI becomes a commodity service with shared standards, instead of a way to concentrate power for tech giants, the more we can ensure it serves all users fairly as an impartial informational resource. But constant vigilance will be required.


So basically it will be the same, just fed in a different format?

That’s a fair assessment. In many ways, the transition from today’s search engines to future conversational AI is unlikely to radically change the underlying economics and power dynamics of information access. Some continuity will persist:

  • Large tech companies will maintain advantages in data, infrastructure, and talent to develop dominant AI systems.
  • There will still be a reliance on these platforms to index and direct people to third-party content.
  • Opacity around how results are ranked, moderated, and personalized will remain a challenge.
  • Business pressures and incentives around data collection, advertising, and consumer lock-in won’t disappear overnight.

However, conversational AI also introduces some new factors:

  • The ability to synthesize novel responses could loosen reliance on indexed documents over time.
  • More interactive and contextual searches may reveal more about user intent and needs.
  • As an anthropomorphic interface, expectations around transparency, accountability, and ethics will be higher.
  • Multi-modal abilities like displaying visual data expand the conversational experience.
  • Integrations with external data sources and services could yield more diverse results.

So in some sense you may be right – similar underlying structures and limitations may persist. But the conversational format also opens new possibilities if we design and regulate these systems thoughtfully. As consumers, we should push companies to ensure these technologies truly enlighten people with broad access to information rather than concentrating control.
